Compare commits


242 Commits

Author SHA1 Message Date
dependabot[bot] afb2e20a79 chore(deps): bump distroless/cc-debian13 from 84fcd3c to 9c4fe23
Bumps distroless/cc-debian13 from `84fcd3c` to `9c4fe23`.

---
updated-dependencies:
- dependency-name: distroless/cc-debian13
  dependency-version: nonroot
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-20 01:09:21 +00:00
RoomWithOutRoof 776e5947ef fix(docker): default CMD to daemon instead of gateway (#3897)
Change default Docker CMD from gateway to daemon in both dev and release stages. Gateway only starts the HTTP/WebSocket server — channel listeners are never spawned. Daemon starts the full runtime: gateway + channels + heartbeat + scheduler.

Closes #3893
2026-03-19 21:08:53 -04:00
Darren.Zeng e2183c89a3 dev: add Justfile for convenient development commands (#3874)
Add a Justfile providing convenient shortcuts for common development
tasks. Just is a modern command runner that serves as a better Make
alternative for project-specific commands.

Commands provided:
- just fmt / just fmt-check - Format code
- just lint - Run clippy
- just test / just test-lib - Run tests
- just ci - Run full CI quality gate locally
- just build / just build-debug - Build the project
- just dev [args] - Run zeroclaw in development mode
- just doc - Generate and open documentation
- just audit / just deny - Security and license checks
- just fmt-toml - Format TOML files with taplo

This complements the existing shell scripts in dev/ and scripts/
by providing a more discoverable command interface.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-19 21:08:16 -04:00
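The commands listed above could map to a Justfile along these lines (a minimal sketch; the recipe bodies are illustrative, not the repository's actual Justfile):

```just
# Format code / check formatting
fmt:
    cargo fmt

fmt-check:
    cargo fmt --check

# Run clippy
lint:
    cargo clippy --all-targets -- -D warnings

# Run tests
test:
    cargo test

# Run zeroclaw in development mode, forwarding extra args
dev *args:
    cargo run -- {{args}}
```

Just resolves recipes from the Justfile at the repo root, so `just lint` works from any subdirectory.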
Darren.Zeng 29dc1172c0 style: add taplo.toml for TOML file formatting consistency (#3873)
Add Taplo configuration file to ensure consistent formatting of TOML
files across the project. Taplo is a TOML formatter and linter that
helps maintain consistent style in configuration files.

Configuration includes:
- 2-space indentation to match project style
- Comment alignment for better readability
- Trailing newline enforcement
- Sorting rules for dependencies and features

This complements rustfmt.toml and .editorconfig to provide complete
formatting coverage for all file types in the project.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-19 21:08:08 -04:00
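A taplo.toml implementing those rules might look roughly like this (values are illustrative; the committed file may differ):

```toml
# 2-space indentation, aligned comments, trailing newline
[formatting]
indent_string     = "  "
align_comments    = true
trailing_newline  = true

# Sort dependency and feature tables alphabetically
[[rule]]
include = ["**/Cargo.toml"]
keys    = ["dependencies", "dev-dependencies", "features"]

[rule.formatting]
reorder_keys = true
```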
Darren.Zeng e79e1b88b7 dev: add .gitattributes for consistent line endings and file handling (#3875)
Add comprehensive .gitattributes file to ensure consistent line endings
and proper file handling across different operating systems and editors.

Key features:
- Automatic line ending normalization (LF for most files)
- Language detection hints for GitHub Linguist
- Binary file declarations to prevent text processing
- CRLF handling for Windows-specific files (.ps1, .sln)
- Mark Cargo.lock as generated to hide diff noise in PRs

This helps prevent issues with:
- Mixed line endings in the repository
- Incorrect language statistics on GitHub
- Binary files being corrupted by text processing
- Unnecessary merge conflicts due to line ending differences

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-19 21:08:02 -04:00
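The key features above translate to .gitattributes entries roughly like these (an illustrative excerpt, not the full committed file):

```gitattributes
# Normalize line endings to LF for text files
* text=auto eol=lf

# Windows-specific files keep CRLF
*.ps1 text eol=crlf
*.sln text eol=crlf

# Binary files: never run text processing
*.png binary

# Mark Cargo.lock as generated to collapse its diffs in PRs
Cargo.lock linguist-generated=true
```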
Reaster f886ce47e9 fix(docker): default CMD to daemon instead of gateway (#3893)
The `gateway` command only starts the HTTP/WebSocket server without
channel listeners. Users configuring channels (Matrix, Telegram, etc.)
in config.toml get no response because the channel sync loops are
never spawned. The `daemon` command starts the full runtime including
gateway, channels, heartbeat, and scheduler.

Co-authored-by: reaster <reaster@courrier.dev>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 21:07:53 -04:00
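In Dockerfile terms the change amounts to the following (a sketch; the actual binary path and stage layout may differ):

```dockerfile
# Before: only the HTTP/WebSocket server, channel listeners never spawned
# CMD ["zeroclaw", "gateway"]

# After: full runtime (gateway + channels + heartbeat + scheduler)
CMD ["zeroclaw", "daemon"]
```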
Nim G 6a30e24e7b feat(channel): add interrupt_on_new_message support for Discord (#3918)
* feat(channel): add /stop command to cancel in-flight tasks

Adds an explicit /stop slash command that allows users on any non-CLI
channel (Matrix, Telegram, Discord, Slack, etc.) to cancel an agent
task that is currently running.

Changes:
- is_stop_command(): new helper that detects /stop (case-insensitive,
  optional @botname suffix), not gated on channel type
- /stop fast path in run_message_dispatch_loop: intercepts /stop before
  semaphore acquisition so the target task is never replaced in the store;
  fires CancellationToken on the running task; sends reply via tokio::spawn
  using the established two-step channel lookup pattern
- register_in_flight separated from interrupt_enabled: all non-CLI tasks
  now enter the in_flight_by_sender store, enabling /stop to reach them;
  auto-cancel-on-new-message remains gated on interrupt_enabled (Telegram/
  Slack only) — this is a deliberate broadening, not a side effect

Deferred to follow-up (feat/matrix-interrupt-on-new-message):
- interrupt_on_new_message config field for Matrix
- thread-aware interruption_scope_key (requires per-channel thread_ts
  semantics analysis; Slack always sets thread_ts, Matrix only for replies)

Supersedes #2855

Tests: 7 new unit tests for is_stop_command; all 4075 tests pass.

* feat(channel): add interrupt_on_new_message support for Discord

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-19 19:33:37 -04:00
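A minimal sketch of what an `is_stop_command` helper as described could look like (the real signature and edge cases in the repo may differ):

```rust
/// Detect "/stop", case-insensitively, with an optional "@botname" suffix.
fn is_stop_command(text: &str) -> bool {
    let lower = text.trim().to_lowercase();
    if lower == "/stop" {
        return true;
    }
    // Accept "/stop@botname": any non-empty suffix with no whitespace
    match lower.strip_prefix("/stop@") {
        Some(rest) => !rest.is_empty() && !rest.contains(char::is_whitespace),
        None => false,
    }
}

fn main() {
    assert!(is_stop_command("/stop"));
    assert!(is_stop_command("/STOP@MyBot"));
    assert!(!is_stop_command("/stopping"));
    assert!(!is_stop_command("please /stop"));
    println!("ok");
}
```

Because the check runs before semaphore acquisition, the /stop message itself never displaces the task it is trying to cancel.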
Argenis 83587eea4a Merge pull request #3917 from theredspoon/feat/mattermost-interrupt-on-new-message
feat(channel): add interrupt_on_new_message support for Mattermost
2026-03-19 19:09:16 -04:00
Argenis 226b2282f5 Fix delegate agent timeout config regression (#4004)
* feat(delegate): make delegate timeout configurable via config.toml

Add configurable timeout options for delegate tool:
- timeout_secs: for non-agentic sub-agent calls (default: 120s)
- agentic_timeout_secs: for agentic sub-agent runs (default: 300s)

Previously these were hardcoded constants (DELEGATE_TIMEOUT_SECS
and DELEGATE_AGENTIC_TIMEOUT_SECS). Users can now customize them
in config.toml under [[delegate.agents]] section.

Fixes #3898

* feat(config): make delegate tool timeouts configurable via config.toml

This change makes the hardcoded 120s/300s delegate tool timeouts
configurable through the config file:

- Add [delegate] section to Config with timeout_secs and agentic_timeout_secs
- Add DelegateToolConfig struct for global default timeout values
- Add DEFAULT_DELEGATE_TIMEOUT_SECS (120) and DEFAULT_DELEGATE_AGENTIC_TIMEOUT_SECS (300) constants
- Remove hardcoded constants from delegate.rs
- Update tests to use constant values instead of magic numbers
- Update examples/config.example.toml with documentation

Closes #3898

* fix: keep delegate timeout fields as Option<u64> with global fallback

- Change DelegateAgentConfig.timeout_secs and agentic_timeout_secs from
  u64 to Option<u64> so per-agent overrides are truly optional
- Implement manual Default for DelegateToolConfig with correct values
  (120s and 300s) instead of derive(Default)
- Add DelegateToolConfig to DelegateTool struct and wire through
  constructors so agent timeouts fall back to global [delegate] config
- Add validation for delegate timeout values in Config::validate()
- Fix example config to use [agents.name] table syntax matching the
  HashMap<String, DelegateAgentConfig> schema
- Add missing timeout fields to all DelegateAgentConfig struct literals
  across codebase (doctor, swarm, model_routing_config, tools/mod)

* chore: trigger CI

* chore: retrigger CI

* fix: cargo fmt line wrapping in config/mod.rs

* fix: import timeout constants in delegate tests

* style: cargo fmt

---------

Co-authored-by: vincent067 <vincent067@outlook.com>
2026-03-19 18:54:33 -04:00
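The Option-with-global-fallback pattern from the final fix can be sketched as follows (field and constant names follow the commit messages; the surrounding structs are simplified):

```rust
const DEFAULT_DELEGATE_TIMEOUT_SECS: u64 = 120;
const DEFAULT_DELEGATE_AGENTIC_TIMEOUT_SECS: u64 = 300;

/// Global [delegate] defaults; manual Default so the values are explicit.
struct DelegateToolConfig {
    timeout_secs: u64,
    agentic_timeout_secs: u64,
}

impl Default for DelegateToolConfig {
    fn default() -> Self {
        Self {
            timeout_secs: DEFAULT_DELEGATE_TIMEOUT_SECS,
            agentic_timeout_secs: DEFAULT_DELEGATE_AGENTIC_TIMEOUT_SECS,
        }
    }
}

/// Per-agent overrides are truly optional.
#[derive(Default)]
struct DelegateAgentConfig {
    timeout_secs: Option<u64>,
}

/// Effective timeout: per-agent override if set, else the global default.
fn effective_timeout(agent: &DelegateAgentConfig, global: &DelegateToolConfig) -> u64 {
    agent.timeout_secs.unwrap_or(global.timeout_secs)
}

fn main() {
    let global = DelegateToolConfig::default();
    let agent = DelegateAgentConfig { timeout_secs: Some(60) };
    assert_eq!(effective_timeout(&agent, &global), 60);
    assert_eq!(effective_timeout(&DelegateAgentConfig::default(), &global), 120);
    println!("ok");
}
```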
argenis de la rosa 7bf5f3edde fix: add missing mattermost field to test InterruptOnNewMessageConfig 2026-03-19 18:53:28 -04:00
Argenis 191192a104 add configurable allow_scripts audit option (#4001)
Co-authored-by: wangyingtao.10 <wangyingtao.10@jd.com>
2026-03-19 18:31:31 -04:00
argenis de la rosa 8b942853c4 merge: resolve conflicts with master after #3891 merge
Keep both the PR's Mattermost interrupt_on_new_message additions
and master's new fields (pending_new_sessions, prompt_config,
autonomy_level) from the /stop command PR (#3891).
2026-03-19 18:29:36 -04:00
Argenis 95473d83b5 Merge pull request #4006 from zeroclaw-labs/readme-updates
docs: add OpenClaw migration commands to all translated READMEs
2026-03-19 18:29:04 -04:00
Argenis b5668acf2f Merge pull request #3891 from theredspoon/feat/stop-command
feat(channel): add /stop command to cancel in-flight tasks
2026-03-19 18:24:04 -04:00
Argenis 2128c9db5b Fix Jira tool panics and dedup bug (#4003)
* feat: add Jira tool with get_ticket, search_tickets, and comment_ticket

Implements a new `jira` tool following the existing zeroclaw tool
conventions (Tool trait, SecurityPolicy, config-gated registration).

- get_ticket: configurable detail level (basic/basic_search/full/changelog)
  with response shaping; always in the default allowed_actions list
- search_tickets: JQL-based search with cursor pagination (nextPageToken);
  always returns basic_search shape; gated by allowed_actions
- comment_ticket: posts ADF comments with inline markdown-like syntax —
  @email mentions resolved to Jira accountId, **bold**, bullet lists,
  newlines; gated by allowed_actions and SecurityPolicy Act operation

Config: [jira] section with base_url, email, api_token (encrypted at
rest, falls back to JIRA_API_TOKEN env var), allowed_actions (default:
["get_ticket"]), and timeout_secs. Validated on load.

Tool description in tool_descriptions/en.toml documents all three
actions and the full comment syntax for the AI system prompt.

* fix: address jira tool code review findings

High priority:
- Validate issue_key against ^[A-Z][A-Z0-9]+-\d+$ before URL interpolation
  to prevent path traversal in get_ticket and comment_ticket

Medium priority:
- Add email guard in tool registration (mod.rs) to skip with a warning
  instead of registering a broken tool when jira.email is empty
- Shape comment_ticket response to return only id, author, created —
  avoids exposing internal Jira metadata to the AI
- Replace O(n²) comment matching in shape_basic with a HashMap lookup
  keyed by comment ID for O(1) access
- Add api_token validation in Config::validate() checking both the
  config field and JIRA_API_TOKEN env var when jira.enabled = true

Low priority:
- Make shape_basic_search private (was accidentally pub)
- Extend clean_email to strip leading punctuation (( and [) so that
  @(john@co.com) resolves correctly; fix suffix computation via pointer
  arithmetic to handle the shifted offset
- Clarify tool_descriptions/en.toml: @prefix is required for mentions,
  bare emails without @ are treated as plain text
- Handle unmatched ** in parse_inline: emit as literal text instead of
  silently producing a bold node with no closing marker

* fix(jira): allow lowercase project keys in issue_key validation

Relax validate_issue_key to accept both PROJ-123 and proj-123, since
some Jira instances use lowercase custom project keys. Path traversal
protection is preserved via alphanumeric + digit-number requirement.

* feat(tools): add tool honesty instructions to system prompt

Prevent AI from fabricating tool results by injecting a CRITICAL:
Tool Honesty section into both channel and CLI/agent system prompts.

Rules: never fabricate or guess tool results, report errors as-is,
and ask the user when unsure if a tool call succeeded.

* style: sort JiraConfig import alphabetically in config/mod.rs

* style(jira): fix strict clippy lints in jira_tool

- Derive Default for LevelOfDetails instead of manual impl
- Use char arrays in trim_start_matches/trim_end_matches
- Allow cast_possible_truncation on search_tickets (usize->u32 bounded by max_results)
- Remove needless borrow on &email

* fix(ci): adapt to upstream autonomy_level additions in channels/mod.rs

- Add missing autonomy_level argument to build_system_prompt_with_mode call in test
- Add missing autonomy_level field in ChannelRuntimeContext test initializer
- Allow large_futures in load_or_init test (Config struct growth from JiraConfig)

* fix(ci): resolve duplicate and missing autonomy_level in test initializers

* fix(ci): use TelegramRecordingChannel in telegram-specific test

The test process_channel_message_executes_tool_calls_instead_of_sending_raw_json
sent messages on channel "telegram" but registered RecordingChannel (name:
"test-channel"), causing the channel lookup to return None and no messages to
be sent.

* fix(jira): prevent panics on short dates, fix dedup bug, normalize base_url

- Add date_prefix() helper to safely slice date strings instead of
  panicking on empty or short strings from the Jira API.
- Replace Vec::dedup() with HashSet-based retain in extract_emails()
  to correctly deduplicate non-adjacent duplicates.
- Strip trailing slashes from base_url during construction to prevent
  double-slash URLs.
- Add tests for date_prefix and non-adjacent email dedup.

---------

Co-authored-by: Anatolii <anatolii@Anatoliis-MacBook.local>
Co-authored-by: Anatolii <anatolii.fesiuk@gmail.com>
2026-03-19 18:14:34 -04:00
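The panic and dedup fixes from the last commit can be sketched roughly as follows (simplified; the real helpers operate on Jira API response fields):

```rust
use std::collections::HashSet;

/// Safely take the "YYYY-MM-DD" prefix of a Jira date string.
/// Slicing with `&s[..10]` would panic on empty or short input.
fn date_prefix(s: &str) -> &str {
    s.get(..10).unwrap_or(s)
}

/// Deduplicate emails even when duplicates are not adjacent.
/// Vec::dedup() only removes *consecutive* duplicates, so
/// ["a", "b", "a"] would keep both "a"s.
fn dedup_emails(emails: &mut Vec<String>) {
    let mut seen = HashSet::new();
    emails.retain(|e| seen.insert(e.clone()));
}

fn main() {
    assert_eq!(date_prefix("2026-03-19T18:14:34Z"), "2026-03-19");
    assert_eq!(date_prefix(""), "");
    let mut v = vec![
        "a@x.com".to_string(),
        "b@x.com".to_string(),
        "a@x.com".to_string(),
    ];
    dedup_emails(&mut v);
    assert_eq!(v, vec!["a@x.com", "b@x.com"]);
    println!("ok");
}
```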
argenis de la rosa 8d3d14f1e4 docs(readme): add OpenClaw migration commands to all translated READMEs
Every translated README now includes the migrate section:
  zeroclaw migrate openclaw --dry-run
  zeroclaw migrate openclaw

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 18:09:46 -04:00
Argenis 632d513c2e fix(agent): preserve native tool-call text in draft updates (#4005)
Preserve assistant text from native tool-call turns in draft updates. Falls back to response_text when parsed_text is empty and native tool calls are present. Relays text through on_delta for draft-capable channels like Telegram.

Supersedes #3976. Closes #3974
2026-03-19 18:07:25 -04:00
Argenis ade588b4ec feat(security): inject security policy summary into LLM system prompt (#4002)
Inject a human-readable summary of the active SecurityPolicy into the system prompt Safety section. LLM sees allowed commands, forbidden paths, autonomy level, and rate limits.

Supersedes #3968. Closes #2404
2026-03-19 17:54:12 -04:00
Dmitrii Mukhutdinov f7636ab81c fix(observability): handle missing OtelObserver match arms and add all-features CI check (#3981)
Add missing CacheHit/CacheMiss match arms to OtelObserver::record_event. Add check-all-features CI job to prevent --all-features compile regressions.
2026-03-19 17:48:35 -04:00
Martin d76e4e5a86 feat(channels): add TTS voice reply support to Telegram channel (#3942)
Adds text-to-speech output to the Telegram channel, mirroring the
existing WhatsApp Web voice-chat implementation. When a user sends a
voice note (transcribed via STT), the channel enters voice-chat mode
and subsequent agent replies are synthesised into a Telegram voice note
via the configured TTS provider, in addition to the normal text reply.
Sending a text message exits voice-chat mode.

Implementation details:
- Add `tts_config`, `voice_chats`, and `pending_voice` fields to
  `TelegramChannel`
- Add `with_tts()` builder method, gated on `config.enabled`
- Track voice-chat state: enter on successful STT transcription, exit
  on incoming text message
- `synthesize_and_send_voice()` static method: synthesises audio via
  `TtsManager` and uploads to Telegram using `sendVoice` multipart API
- 10-second debounce in `send()` ensures only the final substantive
  reply in a tool chain gets a voice note (skips JSON, code blocks,
  URLs, short status messages)
- Wire `.with_tts(config.tts.clone())` into both Telegram construction
  sites in the channel factory

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 17:35:22 -04:00
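The "substantive reply" filter described above might look roughly like this (the heuristics and thresholds are illustrative, not the channel's actual checks):

```rust
/// Decide whether a reply is worth synthesising into a voice note.
/// Skips JSON payloads, code blocks, bare URLs, and short status lines.
fn is_substantive_reply(text: &str) -> bool {
    let t = text.trim();
    if t.len() < 20 {
        return false; // short status message
    }
    if t.starts_with('{') || t.starts_with('[') {
        return false; // looks like JSON
    }
    if t.contains("```") {
        return false; // contains a code block
    }
    if t.starts_with("http://") || t.starts_with("https://") {
        return false; // bare URL
    }
    true
}

fn main() {
    assert!(is_substantive_reply("Here is a summary of today's three meetings."));
    assert!(!is_substantive_reply("OK"));
    assert!(!is_substantive_reply("{\"status\": \"done\", \"count\": 3}"));
    assert!(!is_substantive_reply("https://example.com/report"));
    println!("ok");
}
```

Combined with the 10-second debounce, only the last reply in a tool chain that passes this filter gets a voice note.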
Argenis d9cea87fae Merge pull request #3998 from zeroclaw-labs/fix/images-2
fix(docs): absolute banner URLs + web dashboard logo update
2026-03-19 16:47:09 -04:00
argenis de la rosa 6213bcab07 fix(docs): use absolute URLs for banner in all READMEs + update web dashboard logo
- Replace relative docs/assets/zeroclaw-banner.png paths with absolute
  raw.githubusercontent.com URLs in all 31 README files so the banner
  renders correctly regardless of where the README is viewed
- Switch web dashboard favicon and logos from logo.png to zeroclaw-trans.png
- Add zeroclaw-trans.png and zeroclaw-banner.png assets
- Update build.rs to track new dashboard asset
- Fix missing autonomy_level in new test + Box::pin large future

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 16:34:32 -04:00
argenis de la rosa fe9f58f917 fix(docs): use absolute URLs for banner in all READMEs + update web dashboard logo
- Replace relative docs/assets/zeroclaw-banner.png paths with absolute
  raw.githubusercontent.com URLs in all 31 README files so the banner
  renders correctly regardless of where the README is viewed
- Switch web dashboard favicon and logos from logo.png to zeroclaw-trans.png
- Add zeroclaw-trans.png and zeroclaw-banner.png assets
- Update build.rs to track new dashboard asset

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 16:31:21 -04:00
Argenis 04c7ce4488 Merge pull request #3955 from Alix-007/issue-3952-full-autonomy-channel-prompt
fix(prompt): respect autonomy level in channel prompts
2026-03-19 16:12:13 -04:00
Argenis 5eea95ef2a Merge pull request #3899 from Alix-007/issue-3842-openrouter-timeout
fix(openrouter): respect provider_timeout_secs on slow responses
2026-03-19 16:06:57 -04:00
argenis de la rosa af1c37c2fb fix: pass autonomy_level through to prompt builder in wrapper function
build_system_prompt_with_mode was discarding the autonomy_level
parameter, passing None to build_system_prompt_with_mode_and_autonomy.
This caused full-autonomy prompts to still include "ask before acting"
instructions. Convert the level to an AutonomyConfig and pass it through.
2026-03-19 15:56:37 -04:00
argenis de la rosa e3e4aef21c fix: box-pin large future in config init test to satisfy clippy
Config::load_or_init() produces a future >16KB, triggering
clippy::large_futures. Wrap with Box::pin() as recommended.
2026-03-19 15:44:41 -04:00
argenis de la rosa a48e335be9 fix: box-pin large future in config init test to satisfy clippy
Config::load_or_init() produces a future >16KB, triggering
clippy::large_futures. Wrap with Box::pin() as recommended.
2026-03-19 15:44:19 -04:00
argenis de la rosa fba15520dc fix: add missing autonomy_level field to test ChannelRuntimeContext
The full_autonomy_prompt test was missing the autonomy_level field
added to ChannelRuntimeContext by a recently merged PR.
2026-03-19 15:32:23 -04:00
argenis de la rosa 7504da1117 fix: add missing autonomy_level arg to test after merge with master
The refresh-skills test was missing the autonomy_level parameter
added to build_system_prompt_with_mode and ChannelRuntimeContext
by a recently merged PR.
2026-03-19 15:31:18 -04:00
argenis de la rosa 6292cdfe1c Merge origin/master into issue-3952-full-autonomy-channel-prompt
Resolve conflict in src/channels/mod.rs Safety section. Keeps the
PR's AutonomyConfig-based prompt construction (build_system_prompt_with_mode_and_autonomy)
while incorporating master's granular safety rules (conditional
destructive-command and ask-before-acting lines based on autonomy level).
Also fixes missing autonomy_level arg in refresh-skills test and removes
duplicate autonomy.level args from auto-merged call sites.
2026-03-19 15:27:43 -04:00
argenis de la rosa 693661b564 Merge origin/master into issue-3842-openrouter-timeout
Resolve merge conflicts keeping the PR's changes:
- timeout_secs parameter in OpenRouterProvider::new()
- read_response_body + parse_response_body pattern
- OPENROUTER_CONNECT_TIMEOUT_SECS and DEFAULT_OPENROUTER_TIMEOUT_SECS constants
- Update master's new tests to use two-arg new() signature
2026-03-19 15:19:32 -04:00
Argenis 4daec8c0df Merge pull request #3288 from Alix-007/fix-2400-block-config-self-mutation
fix(security): block agent writes to runtime config state
2026-03-19 15:16:48 -04:00
Argenis 3cf609cb38 Merge pull request #3959 from Alix-007/issue-3706-read-skill
feat(skills): add read_skill for compact mode
2026-03-19 15:16:42 -04:00
Argenis e1b7d29f1b Merge pull request #3940 from Alix-007/issue-3845-refresh-skills-on-new
channel: refresh available skills after /new
2026-03-19 15:16:35 -04:00
Argenis fef69a4128 Merge pull request #3787 from Alix-007/issue-3774-path-normalization
fix(tools): normalize workspace-prefixed paths
2026-03-19 15:16:17 -04:00
Argenis 643b683c39 Merge pull request #3954 from Alix-007/issue-2901-zai-tool-stream
fix(zai): enable tool_stream for tool-capable requests
2026-03-19 15:15:59 -04:00
Argenis 74c93b0ebc Merge pull request #3943 from Alix-007/issue-3902-claude-code-test-race
test(claude_code): isolate echo script per test run
2026-03-19 15:15:52 -04:00
Argenis a7bf69d279 Merge pull request #3356 from Alix-007/fix/config-load-initialized-state
fix(config): report existing configs as initialized on load
2026-03-19 15:15:47 -04:00
Argenis f68af9a4c7 Merge pull request #3994 from zeroclaw-labs/fix/images
fix(docs): update banner image and add Instagram to all READMEs
2026-03-19 15:14:02 -04:00
Argenis cca3d66955 fix: add Node.js build dependency to Homebrew formula template (#3996)
The Homebrew publish workflow downloads a source tarball but never
ensures Node.js is available during the build. Without it, build.rs
cannot run npm to compile the web frontend, so Homebrew-installed
ZeroClaw ships without the web dashboard.

Add `depends_on "node" => :build` to the formula by inserting it
after the existing `depends_on "rust" => :build` line. This lets
build.rs detect npm and automatically run `npm ci && npm run build`
to produce the web/dist assets.

Fixes #3991
2026-03-19 15:13:56 -04:00
Argenis 95bf229225 fix(config): enable compact_context by default (#3995)
* fix: change compact_context default to true

Local LLMs with limited context windows immediately run out of context
when compact_context defaults to false. The system prompt alone can
consume 25K+ tokens, exceeding even 55K context windows with history.

Setting compact_context=true by default limits system prompt injection
to 6000 chars and RAG results to 2 chunks, making the agent usable
with smaller models out of the box.

Fixes #3987

* docs: update compact_context default to true in config reference

Update all locale variants (en, zh-CN, vi) to reflect the new default.

* test: update tests to expect compact_context default of true

Update assertions in schema.rs unit tests and config_persistence.rs
component tests to match the new default value.
2026-03-19 15:13:14 -04:00
argenis de la rosa ebe19147f2 fix(docs): update banner image and add Instagram badge to all READMEs
- Replace zeroclaw.png (broken/outdated) with zeroclaw-banner.png
  across all 30 translated README files
- Add Instagram social badge (@therealzeroclaw) to all translations

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 14:57:28 -04:00
Argenis e387f58579 Merge pull request #3951 from Alix-007/issue-3946-release-memory-postgres
build(release): include memory-postgres in published artifacts
2026-03-19 14:42:15 -04:00
Argenis fa06798926 fix(cron): persist allowed_tools for agent jobs (#3993)
Persist allowed_tools in cron_jobs table, threading it through CLI add/update and cron_add/cron_update tool APIs. Add regression coverage for store, tool, and CLI roundtrip paths.

Fixups over original PR #3929: add allowed_tools to all_overdue_jobs SELECT (merge gap), resolve merge conflicts.

Closes #3920
Supersedes #3929
2026-03-19 14:37:55 -04:00
Alix-007 a4cd4b287e feat(slack): add thread_replies channel option (#3930)
Add a thread_replies option to Slack channel config (default true). When false, replies go to channel root instead of the originating thread.

Closes #3888
2026-03-19 14:32:02 -04:00
argenis de la rosa 3ce7f2345e merge: resolve conflicts with master, include channel-lark in RELEASE_CARGO_FEATURES
Add channel-lark (merged to master separately) to RELEASE_CARGO_FEATURES
env var. Keep the DRY env-var approach and remove stale Docker build-args.
2026-03-19 14:24:30 -04:00
Argenis eb9dfc04b4 fix(anthropic): always apply cache_control to system prompts (#3990)
* fix: always use Blocks format for system prompts with cache_control

System prompts under 3KB were wrapped in SystemPrompt::String which
cannot carry cache_control headers, resulting in 0% cache hit rate
on Haiku 4.5. Always use SystemPrompt::Blocks with ephemeral
cache_control regardless of prompt size.

Fixes #3977

* fix: lower conversation caching threshold from >4 to >1 messages

The previous threshold of >4 non-system messages was too restrictive,
delaying cache benefits until 5+ turns. Lower to >1 so caching kicks
in after the first user+assistant exchange.

Fixes #3977

* test: update anthropic cache tests for new thresholds and Blocks format

- convert_messages_small_system_prompt now expects Blocks with
  cache_control instead of String variant
- should_cache_conversation tests updated for >1 threshold
- backward_compatibility test replaced with blocks-system test
2026-03-19 14:21:45 -04:00
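The two threshold changes can be sketched as follows (shapes are heavily simplified; the real code builds Anthropic API request structs with cache_control markers):

```rust
/// System prompts are now always sent as Blocks so every prompt,
/// regardless of size, can carry an ephemeral cache_control marker.
enum SystemPrompt {
    Blocks(Vec<String>),
}

fn build_system_prompt(text: &str) -> SystemPrompt {
    // Previously: prompts under 3KB used a String variant that could
    // not carry cache_control, yielding a 0% cache hit rate.
    SystemPrompt::Blocks(vec![text.to_string()])
}

/// Cache the conversation once there is more than one non-system
/// message, i.e. after the first user+assistant exchange
/// (the old threshold was > 4).
fn should_cache_conversation(non_system_messages: usize) -> bool {
    non_system_messages > 1
}

fn main() {
    assert!(!should_cache_conversation(1));
    assert!(should_cache_conversation(2));
    let SystemPrompt::Blocks(blocks) = build_system_prompt("You are ZeroClaw.");
    assert_eq!(blocks.len(), 1);
    println!("ok");
}
```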
Argenis 9cc74a2698 fix(security): wire sandbox into shell command execution (#3989)
* fix: add sandbox field to ShellTool struct

Add `sandbox: Arc<dyn Sandbox>` field to `ShellTool` and a
`new_with_sandbox()` constructor so callers can inject the configured
sandbox backend. The existing `new()` constructor defaults to
`NoopSandbox` for backward compatibility.

Ref: #3983

* fix: apply sandbox wrapping in ShellTool::execute()

Call `self.sandbox.wrap_command()` on the underlying std::process::Command
(via `as_std_mut()`) after building the shell command and before clearing
the environment. This ensures every shell command passes through the
configured sandbox backend before execution.

Ref: #3983

* fix: wire up sandbox creation at ShellTool callsites

In `all_tools_with_runtime()`, create a sandbox from
`root_config.security` via `create_sandbox()` and pass it to
`ShellTool::new_with_sandbox()`. The `default_tools_with_runtime()`
path retains `ShellTool::new()` which defaults to `NoopSandbox`.

Ref: #3983

* test: add sandbox integration tests for ShellTool

Verify that ShellTool can be constructed with a sandbox via
`new_with_sandbox()`, that NoopSandbox leaves commands unmodified,
and that command execution works end-to-end with a sandbox attached.

Ref: #3983
2026-03-19 14:21:42 -04:00
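The injection pattern described above, a trait object with a no-op default, can be sketched like this (simplified: the real `wrap_command` operates on a `std::process::Command`, not a string):

```rust
use std::sync::Arc;

/// Minimal stand-in for the sandbox backend interface.
trait Sandbox {
    /// Wrap a shell command line for sandboxed execution.
    fn wrap_command(&self, cmd: &str) -> String;
}

/// Default backend: passes commands through unmodified.
struct NoopSandbox;

impl Sandbox for NoopSandbox {
    fn wrap_command(&self, cmd: &str) -> String {
        cmd.to_string()
    }
}

struct ShellTool {
    sandbox: Arc<dyn Sandbox>,
}

impl ShellTool {
    /// Backward-compatible constructor: defaults to NoopSandbox.
    fn new() -> Self {
        Self::new_with_sandbox(Arc::new(NoopSandbox))
    }

    /// Callers inject the configured sandbox backend.
    fn new_with_sandbox(sandbox: Arc<dyn Sandbox>) -> Self {
        Self { sandbox }
    }

    fn prepare(&self, cmd: &str) -> String {
        // Every shell command passes through the sandbox before execution.
        self.sandbox.wrap_command(cmd)
    }
}

fn main() {
    let tool = ShellTool::new();
    assert_eq!(tool.prepare("echo hi"), "echo hi");
    println!("ok");
}
```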
Argenis 133dc46b41 fix(web): restore accidentally deleted logo file (#3988)
* fix: restore accidentally deleted logo file

The logo.png was removed in commit 48bdbde2 but is still referenced
by the web UI components. Restore it from git history.

Fixes #3984

* fix: copy logo to web/dist for rust-embed

The Rust binary embeds files from web/dist/ via rust-embed, so the
logo must also be present there to be served without a rebuild.

Fixes #3984
2026-03-19 14:21:15 -04:00
Argenis ad03605cad Merge pull request #3949 from Alix-007/issue-3817-cron-delivery-context
fix: default cron delivery to the active channel context
2026-03-19 14:20:59 -04:00
Argenis ae1acf9b9c Merge pull request #3950 from Alix-007/issue-3466-homebrew-service-workspace
fix(onboard): warn when Homebrew services use another workspace
2026-03-19 14:20:56 -04:00
Alix-007 cc91f22e9b fix(skills): narrow shell shebang detection (#3944)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-19 14:10:51 -04:00
Martin 030f5fe288 fix(install): fix guided installer in LXC/container environments (#3947)
Replace subshell-based /dev/stdin probing in guided_input_stream with a
file-descriptor approach (guided_open_input) that works reliably in LXC
containers accessed over SSH.

The previous implementation probed /dev/stdin and /proc/self/fd/0 via
subshells before falling back to /dev/tty. In LXC containers these
probes fail even when FD 0 is perfectly usable, causing the guided
installer to exit with "requires an interactive terminal".

The fix:
- When stdin is a terminal (-t 0), assign GUIDED_FD=0 directly without
  any subshell probing — trusting the kernel's own tty check
- Otherwise, open /dev/tty as an explicit fd (exec {GUIDED_FD}</dev/tty)
- guided_read uses `read -u "$GUIDED_FD"` instead of `< "$file_path"`
- Add echo after silent reads (password prompts) for correct line handling

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 14:10:48 -04:00
Giulio V c47bbcc972 fix(cron): add startup catch-up and drop login shell flag (#3948)
* fix(cron): add startup catch-up and drop login shell flag

Problems:
1. When ZeroClaw started after downtime (late boot, daemon restart),
   overdue jobs were picked up via `due_jobs()` but limited by
   `max_tasks` per poll cycle — with many overdue jobs, catch-up
   could take many cycles.
2. Cron shell jobs used `sh -lc` (login shell), which loads the
   full user profile on every execution — slow and may cause
   unexpected side effects.

Fixes:
- Add `all_overdue_jobs()` store query without `max_tasks` limit
- Add `catch_up_overdue_jobs()` startup phase that runs ALL overdue
  jobs once before entering the normal polling loop
- Extract `build_cron_shell_command()` helper using `sh -c` (non-login)
- Add structured tracing for catch-up progress
- Add tests for all new functions

* feat(cron): make catch-up configurable via API and control panel

Add `catch_up_on_startup` boolean to `[cron]` config (default: true).
When enabled, the scheduler runs all overdue jobs at startup before
entering the normal polling loop. Users can toggle this from:

- The Cron page toggle switch in the control panel
- PATCH /api/cron/settings { "catch_up_on_startup": false }
- The `[cron]` section of the TOML config editor

Also adds GET /api/cron/settings endpoint to read cron subsystem
settings without parsing the full config.

* fix(config): add catch_up_on_startup to CronConfig test constructors

The CI Lint job failed because the `cron_config_serde_roundtrip` test
constructs CronConfig directly and was missing the new field.
2026-03-19 14:10:37 -04:00
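The difference between the capped poll and the startup catch-up can be sketched with an in-memory stand-in (the real store queries hit the cron_jobs table):

```rust
#[derive(Clone)]
struct Job {
    id: u32,
    due_at: u64, // unix seconds
}

/// Normal poll: due jobs, capped at max_tasks per cycle.
fn due_jobs(jobs: &[Job], now: u64, max_tasks: usize) -> Vec<Job> {
    jobs.iter()
        .filter(|j| j.due_at <= now)
        .take(max_tasks)
        .cloned()
        .collect()
}

/// Startup catch-up: ALL overdue jobs, no max_tasks limit,
/// run once before entering the normal polling loop.
fn all_overdue_jobs(jobs: &[Job], now: u64) -> Vec<Job> {
    jobs.iter().filter(|j| j.due_at <= now).cloned().collect()
}

fn main() {
    let jobs: Vec<Job> = (0..10).map(|i| Job { id: i, due_at: 100 }).collect();
    // A poll cycle with max_tasks = 3 would need four cycles to catch up...
    assert_eq!(due_jobs(&jobs, 200, 3).len(), 3);
    // ...while the startup phase drains the whole backlog in one pass.
    assert_eq!(all_overdue_jobs(&jobs, 200).len(), 10);
    println!("ok");
}
```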
Argenis 72fbb22059 Merge pull request #3985 from zeroclaw-labs/fix/aur-ssh-publish
fix(ci): harden AUR SSH key setup and add diagnostics
2026-03-19 13:44:01 -04:00
argenis de la rosa cbb3d9ae92 fix(ci): harden AUR SSH key setup and add diagnostics (#3952)
The AUR publish step fails with "Permission denied (publickey)".
Root cause is likely key formatting (Windows line endings from
GitHub secrets UI) or missing public key registration on AUR.

Changes:
- Normalize line endings (strip \r) when writing SSH key
- Set correct permissions on ~/.ssh (700) and ~/.ssh/config (600)
- Validate key with ssh-keygen before attempting clone
- Add SSH connectivity test for clearer error diagnostics

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 13:28:58 -04:00
Argenis 8d1eebad4d Merge pull request #3980 from zeroclaw-labs/version-bump-0.5.1
chore: bump version to 0.5.1
2026-03-19 09:56:36 -04:00
argenis de la rosa 0fdd1ad490 chore: bump version to 0.5.1
Release highlights:
- Autonomy enforcement in gateway and channel paths (#3952)
- conversational_ai startup warning for unimplemented config (#3958)
- Heartbeat default interval 30→5min (#3938)
- Provider timeout and error handling improvements (#3973, #3978)
- Docker/CI postgres and Lark feature fixes (#3971, #3933)
- Tool path resolution fix (#3937)
- OTP config fix (#3936)
- README: Instagram badge + banner image (#3979)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 09:45:32 -04:00
Argenis 86bc60fcd1 Merge pull request #3979 from zeroclaw-labs/readme
docs: add Instagram badge and banner to README
2026-03-19 09:42:00 -04:00
argenis de la rosa 4837e1fe73 docs(readme): add Instagram social badge and switch to banner image
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 09:30:54 -04:00
Argenis 985977ae0c fix(providers): exempt tool schema errors from non-retryable classification (#3978)
* fix: exempt tool schema validation errors from non-retryable classification

Groq returns 400 "tool call validation failed" which was classified as
non-retryable by is_non_retryable(), preventing the provider-level
fallback in compatible.rs from executing. Add is_tool_schema_error()
to detect these errors and return false from is_non_retryable(), allowing
the retry loop to pass control back to the provider's built-in fallback.

Fixes #3757

* test: add unit tests for tool schema error detection in reliable.rs

Verify is_tool_schema_error detects Groq-style validation failures and
that is_non_retryable returns false for tool schema 400s while still
returning true for other 400 errors like invalid API key.

* fix: escape format braces in test string literals for cargo check

The anyhow::anyhow! macro interprets curly braces as format
placeholders. Use explicit format argument to pass JSON-containing
strings in tests.
2026-03-19 09:25:49 -04:00
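The exemption logic can be sketched like this. The function names `is_tool_schema_error` and `is_non_retryable` come from the commit; the specific match strings and surrounding logic are illustrative assumptions, not the real classifier:

```rust
// Sketch of exempting a 400 "tool call validation failed" response from the
// non-retryable class, so the provider-level fallback still gets a chance.
fn is_tool_schema_error(msg: &str) -> bool {
    let m = msg.to_lowercase();
    // Groq-style tool schema validation failures (illustrative patterns).
    m.contains("tool call validation failed") || m.contains("tool_use_failed")
}

fn is_non_retryable(msg: &str) -> bool {
    // Tool schema 400s are exempted first; other 400s stay non-retryable.
    if is_tool_schema_error(msg) {
        return false;
    }
    msg.contains("400") || msg.to_lowercase().contains("invalid api key")
}

fn main() {
    assert!(!is_non_retryable("400: tool call validation failed"));
    assert!(is_non_retryable("400: invalid API key"));
}
```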
Argenis 72b10f12dd Merge pull request #3975 from zeroclaw-labs/agent-loop
fix: enforce autonomy level in gateway/channel paths + conversational_ai warning
2026-03-19 09:20:21 -04:00
Argenis 3239f5ea07 fix(ci): include channel-lark feature in precompiled release binaries (#3933) (#3972)
Add channel-lark to the cargo --features flag in all release and
cross-platform build workflows, and to the Docker build-args.  This
gives users Feishu/Lark channel support out of the box without needing
to compile from source.

The channel-lark feature depends only on dep:prost (pure Rust protobuf),
so it is safe to enable on all platforms (Linux, macOS, Windows, Android).
2026-03-19 09:15:10 -04:00
Argenis 3353729b01 fix(openrouter): respect provider_timeout_secs and improve error messages (#3973)
* fix(openrouter): wire provider_timeout_secs through factory

Apply the configured provider_timeout_secs to OpenRouterProvider
in the provider factory, matching the pattern used for compatible
providers.

* fix(openrouter): add timeout_secs field to OpenRouterProvider

Add a configurable timeout_secs field (default 120s) and a
with_timeout_secs() builder method so the HTTP client timeout
can be overridden via provider config instead of being hardcoded.

* refactor(openrouter): improve response decode error messages

Read the response body as text first, then parse with
serde_json::from_str so that decode failures include a truncated
snippet of the raw body for easier debugging.

* test(openrouter): add timeout_secs configuration tests

Verify that the default timeout is 120s and that with_timeout_secs
correctly overrides it.

* style: run rustfmt on openrouter.rs
2026-03-19 09:12:14 -04:00
argenis de la rosa b6c2930a70 fix(agent): enforce autonomy level in gateway and channel paths (#3952)
- Channel tool filtering (`non_cli_excluded_tools`) now respects
  `autonomy.level = "full"` — full-autonomy agents keep all tools
  available regardless of channel.
- Gateway `process_message` now creates and passes an `ApprovalManager`
  to `agent_turn`, so `ReadOnly`/`Supervised` policies are enforced
  instead of silently skipped.
- Gateway also applies `non_cli_excluded_tools` filtering with the same
  full-autonomy bypass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 08:56:45 -04:00
argenis de la rosa 181cafff70 fix(config): warn when conversational_ai.enabled is set (#3958)
The conversational_ai config section is parsed but not yet consumed by
any runtime code. Emit a startup warning so users know the setting is
ignored, and update the doc comment to mark it as reserved for future use.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 08:56:38 -04:00
Argenis d87f387111 fix(docker): add memory-postgres feature to Docker and CI builds (#3971)
The GHCR image v0.5.0 fails to start when users configure the postgres
memory backend because the binary was compiled without the
memory-postgres cargo feature. Add it to all build configurations:
Dockerfiles (default ARG), release workflows (cargo build --features),
and Docker push steps (ZEROCLAW_CARGO_FEATURES build-arg).

Fixes #3946
2026-03-19 08:51:20 -04:00
Argenis 7068079028 fix: make channel system prompt respect autonomy.level = full (#3952) (#3970)
When autonomy.level is set to "full", the channel/web system prompt no
longer includes instructions telling the model to ask for permission
before executing tools. Previously these safety lines were hardcoded
regardless of autonomy config, causing the LLM to simulate approval
dialogs in channel and web-interface modes even though the
ApprovalManager correctly allowed execution.

The fix adds an autonomy_level parameter to build_system_prompt_with_mode
and conditionally omits the "ask before acting" instructions when the
level is Full. Core safety rules (no data exfiltration, prefer trash)
are always included.
2026-03-19 08:48:38 -04:00
Argenis a9b511e6ec fix: omit experimental conversational_ai section from default config (#3969)
The [conversational_ai] config section was serialized into every
freshly-generated config.toml despite the feature being experimental
and not yet wired into the agent runtime. This confused new users who
found an undocumented section in their config.

Add skip_serializing_if = "ConversationalAiConfig::is_disabled" so the
section is only written when a user has explicitly enabled it. Existing
configs that already contain the section continue to deserialize
correctly via #[serde(default)].

Fixes #3958
2026-03-19 08:48:33 -04:00
Argenis 65cb4fe099 feat(heartbeat): default interval 30→5min + prune heartbeat from auto-save (#3938)
Lower the default heartbeat interval to 5 minutes to match the renewable
partial wake-lock cadence. Add `[heartbeat task` to the memory auto-save
skip filter so heartbeat prompts (both Phase 1 decision and Phase 2 task
execution) do not pollute persistent conversation memory.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-19 08:17:08 -04:00
Alix-007 1bbc159e0e style(zai): satisfy rustfmt in tool_stream request 2026-03-19 19:15:58 +08:00
Alix-007 0d28cca843 build(release): drop stale docker feature args 2026-03-19 19:14:07 +08:00
Alix-007 b1d20d38f9 feat(skills): add read_skill for compact mode 2026-03-19 17:53:40 +08:00
Alix-007 2bad6678ec fix(prompt): respect autonomy level in channel prompts 2026-03-19 16:54:51 +08:00
Alix-007 b6fe054915 fix(zai): send tool_stream for tool-capable requests 2026-03-19 16:32:06 +08:00
Alix-007 7ddd2aace3 build(release): ship postgres-capable release artifacts 2026-03-19 15:37:45 +08:00
Alix-007 c7b3b762e0 fix(onboard): warn when Homebrew service uses another workspace 2026-03-19 15:30:40 +08:00
Alix-007 4b00e8ba75 fix(cron): default channel delivery to active reply target 2026-03-19 15:11:47 +08:00
Alix-007 dd462a2b04 test(claude_code): isolate echo script per test run 2026-03-19 13:47:08 +08:00
Alix-007 2d68b880c2 Fix /new regression test lint scope 2026-03-19 12:19:31 +08:00
Alix-007 3a672a2ede Refresh skills after new channel sessions 2026-03-19 12:07:08 +08:00
Argenis 2e48cbf7c3 fix(tools): use resolve_tool_path for consistent path resolution (#3937)
Replace workspace_dir.join(path) with resolve_tool_path(path) in
file_write, file_edit, and pdf_read tools to correctly handle absolute
paths within the workspace directory, preventing path doubling.

Closes #3774
2026-03-18 23:51:35 -04:00
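The path-doubling problem can be modeled in a few lines. This is a simplified stand-in for `resolve_tool_path`, assuming it distinguishes workspace-rooted absolute paths from relative ones (the real function may handle more cases):

```rust
use std::path::{Path, PathBuf};

// Illustrative version of the fix: if the model already passes an absolute
// path inside the workspace, joining it onto workspace_dir would duplicate
// the prefix ("/ws" + "/ws/notes.md" -> "/ws/ws/notes.md").
fn resolve_tool_path(workspace_dir: &Path, path: &str) -> PathBuf {
    let p = Path::new(path);
    if p.is_absolute() && p.starts_with(workspace_dir) {
        p.to_path_buf() // already workspace-rooted: use as-is
    } else {
        workspace_dir.join(p) // relative: anchor under the workspace
    }
}

fn main() {
    let ws = Path::new("/home/user/ws");
    assert_eq!(
        resolve_tool_path(ws, "/home/user/ws/notes.md"),
        PathBuf::from("/home/user/ws/notes.md")
    );
    assert_eq!(
        resolve_tool_path(ws, "notes.md"),
        PathBuf::from("/home/user/ws/notes.md")
    );
}
```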
Argenis e4910705d1 fix(config): add missing challenge_max_attempts field to OtpConfig (#3919) (#3936)
The OtpConfig struct uses deny_unknown_fields but was missing the
challenge_max_attempts field, causing zeroclaw config schema to fail
with a TOML parse error when the field appeared in config files.

Add challenge_max_attempts as an optional u32 field defaulting to 3,
with a validation check ensuring it is greater than 0.
2026-03-18 23:48:53 -04:00
Argenis 1b664143c2 fix: move misplaced include key from [lib] to [package] in Cargo.toml (#3935)
The `include` array was placed after `[lib]` without a section header,
causing Cargo to parse it as `lib.include` — an invalid manifest key.
This triggered a warning during builds and caused lockfile mismatch
errors when building with --locked in Docker (Dockerfile.debian).

Move the `include` key to the `[package]` section where it belongs and
regenerate Cargo.lock to stay in sync.

Fixes #3925
2026-03-18 23:48:50 -04:00
Argenis 950f996812 Merge pull request #3926 from zeroclaw-labs/fix/pairing-code-terminal-display
fix(gateway): move pairing code below dashboard URL in terminal
2026-03-18 20:34:08 -04:00
argenis de la rosa b74c5cfda8 fix(gateway): move pairing code below dashboard URL in terminal banner
Repositions the one-time pairing code display to appear directly below
the dashboard URL for cleaner terminal output, and removes the duplicate
display that was showing at the bottom of the route list.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 19:50:26 -04:00
Argenis 02688eb124 feat(skills): autonomous skill creation from multi-step tasks (#3916)
Add SkillCreator module that persists successful multi-step task
executions as reusable SKILL.toml definitions under the workspace
skills directory.

- SkillCreationConfig in [skills.skill_creation] (disabled by default)
- Slug validation, TOML generation, embedding-based deduplication
- LRU eviction when max_skills limit is reached
- Agent loop integration post-success
- Gated behind `skill-creation` compile-time feature flag

Closes #3825.
2026-03-18 17:15:02 -04:00
Argenis 2c92cf913b fix: ensure SOUL.md and IDENTITY.md exist in non-tty sessions (#3915)
When the workspace is created outside of `zeroclaw onboard` (e.g., via
cron, daemon, or `< /dev/null`), SOUL.md and IDENTITY.md were never
scaffolded, causing the agent to activate without identity files.

Added `ensure_bootstrap_files()` in `Config::load_or_init()` that
idempotently creates default SOUL.md and IDENTITY.md if missing.

Closes #3819.
2026-03-18 17:12:44 -04:00
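The idempotent scaffolding pattern can be sketched as below. This is a hypothetical simplification of `ensure_bootstrap_files` (real defaults and file set will differ); the key property is that existing files are never overwritten:

```rust
use std::fs;
use std::path::Path;

// Hypothetical sketch of idempotent scaffolding: each identity file is
// written only when absent, so files from a real onboard run survive.
fn ensure_bootstrap_files(workspace: &Path) -> std::io::Result<()> {
    fs::create_dir_all(workspace)?;
    for (name, default_body) in [("SOUL.md", "# Soul\n"), ("IDENTITY.md", "# Identity\n")] {
        let path = workspace.join(name);
        if !path.exists() {
            fs::write(&path, default_body)?;
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let ws = std::env::temp_dir().join(format!("zc-demo-{}", std::process::id()));
    ensure_bootstrap_files(&ws)?;
    fs::write(ws.join("SOUL.md"), "customized")?; // simulate a user edit
    ensure_bootstrap_files(&ws)?;                 // second run must not overwrite
    assert_eq!(fs::read_to_string(ws.join("SOUL.md"))?, "customized");
    fs::remove_dir_all(&ws)?;
    Ok(())
}
```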
Argenis 3c117d2d7b feat(delegate): make sub-agent timeouts configurable via config.toml (#3909)
Add `timeout_secs` and `agentic_timeout_secs` fields to
`DelegateAgentConfig` so users can tune per-agent timeouts instead
of relying on the hardcoded 120s / 300s defaults.

Validation rejects values of 0 or above 3600s, matching the pattern
used by MCP timeout validation.

Closes #3898
2026-03-18 17:07:03 -04:00
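The validation rule described above (reject 0 and anything over 3600 seconds) can be sketched as a small helper. The function name and error wording are illustrative assumptions:

```rust
// Illustrative validation mirroring the described rule: timeouts must be
// in the range 1..=3600 seconds. Not the actual zeroclaw validation code.
fn validate_timeout_secs(field: &str, value: u64) -> Result<(), String> {
    if value == 0 || value > 3600 {
        Err(format!("{field} must be between 1 and 3600 seconds, got {value}"))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(validate_timeout_secs("timeout_secs", 120).is_ok());
    assert!(validate_timeout_secs("timeout_secs", 0).is_err());
    assert!(validate_timeout_secs("agentic_timeout_secs", 7200).is_err());
}
```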
Argenis 1f7c3c99e4 feat(i18n): externalize tool descriptions for translation (#3912)
Add a locale-aware tool description system that loads translations from
TOML files in tool_descriptions/. This enables non-English users to see
tool descriptions in their language.

- Add src/i18n.rs module with ToolDescriptions loader, locale detection
  (ZEROCLAW_LOCALE, LANG, LC_ALL env vars), and English fallback chain
- Add locale config field to Config struct for explicit locale override
- Create tool_descriptions/en.toml with all 47 tool descriptions
- Create tool_descriptions/zh-CN.toml with Chinese translations
- Integrate with ToolsSection::build() and build_tool_instructions()
  to resolve descriptions from locale files before hardcoded fallback
- Add PromptContext.tool_descriptions field for prompt-time resolution
- Add AgentBuilder.tool_descriptions() setter for Agent construction
- Include tool_descriptions/ in Cargo.toml package include list
- Add 8 unit tests covering locale loading, fallback chains, env
  detection, and config override

Closes #3901
2026-03-18 17:01:39 -04:00
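The locale fallback chain can be sketched as follows, checking `ZEROCLAW_LOCALE`, then `LANG`, then `LC_ALL`, and defaulting to English. The exact precedence and tag normalization in zeroclaw's `src/i18n.rs` may differ; this is a minimal model:

```rust
// Sketch of locale detection with an English fallback. The lookup closure
// stands in for std::env::var so the logic is testable without touching
// the real process environment.
fn detect_locale(lookup: impl Fn(&str) -> Option<String>) -> String {
    for var in ["ZEROCLAW_LOCALE", "LANG", "LC_ALL"] {
        if let Some(v) = lookup(var) {
            // "zh_CN.UTF-8" -> "zh-CN": drop the encoding, normalize separator
            let tag = v.split('.').next().unwrap_or(&v).replace('_', "-");
            if !tag.is_empty() {
                return tag;
            }
        }
    }
    "en".to_string() // English fallback
}

fn main() {
    let env = |k: &str| (k == "LANG").then(|| "zh_CN.UTF-8".to_string());
    assert_eq!(detect_locale(env), "zh-CN");
    assert_eq!(detect_locale(|_| None), "en");
}
```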
Argenis 92940a3d16 Merge pull request #3904 from zeroclaw-labs/fix/install-stale-build-cache
fix(install): clean stale build cache on upgrade
2026-03-18 15:49:10 -04:00
Argenis d77c616905 fix: reset tool call dedup cache each iteration to prevent loops (#3910)
The seen_tool_signatures HashSet was initialized outside the iteration loop, causing cross-iteration deduplication of legitimate tool calls. This triggered a self-correction spiral where the agent repeatedly attempted skipped calls until hitting max_iterations.

Moving the HashSet inside the loop ensures deduplication only applies within a single iteration, as originally intended.

Fixes #3798
2026-03-18 15:45:10 -04:00
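The scoping bug and its fix can be modeled in miniature. Assuming a simplified iteration structure (the real agent loop is far richer), the essential change is where the `HashSet` is created:

```rust
use std::collections::HashSet;

// Minimal model of the fix: the dedup set is created inside the iteration
// loop, so identical tool calls in *later* iterations are allowed to run.
fn executed_calls(iterations: &[Vec<&str>]) -> Vec<String> {
    let mut executed = Vec::new();
    for calls in iterations {
        let mut seen: HashSet<&str> = HashSet::new(); // reset each iteration
        for call in calls {
            if seen.insert(call) {
                executed.push(call.to_string());
            } // duplicates within ONE iteration are still skipped
        }
    }
    executed
}

fn main() {
    // "read_file" repeated across iterations is legitimate and must run twice;
    // the duplicate within iteration 1 is still deduplicated.
    let out = executed_calls(&[vec!["read_file", "read_file"], vec!["read_file"]]);
    assert_eq!(out, vec!["read_file", "read_file"]);
}
```

With the set declared outside the loop, the second iteration's call would have been skipped, producing exactly the self-correction spiral the commit describes.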
Argenis ac12470c27 fix(channels): respect ack_reactions config for Telegram channel (#3834) (#3913)
The Telegram channel was ignoring the ack_reactions setting because it
sent setMessageReaction API calls directly in its polling loop, bypassing
the top-level channels_config.ack_reactions check.

- Add optional ack_reactions field to TelegramConfig so it can be set
  under [channels_config.telegram] without "unknown key" warnings
- Add ack_reactions field and with_ack_reactions() builder to
  TelegramChannel, defaulting to true
- Guard try_add_ack_reaction_nonblocking() behind self.ack_reactions
- Wire channel-level override with fallback to top-level default
- Add config deserialization and channel behavior tests
2026-03-18 15:40:31 -04:00
Nim G a322e01b5f feat(channel): add interrupt_on_new_message support for Mattermost 2026-03-18 16:40:25 -03:00
Argenis c5a1148ae9 fix: ensure install.sh creates config.toml and workspace files (#3852) (#3906)
When running install.sh with --docker --skip-build --prefer-prebuilt
(especially with podman via ZEROCLAW_CONTAINER_CLI), the script would
skip creating config.toml and workspace scaffold files because these
were only generated by the onboard wizard, which requires an interactive
terminal or explicit API key.

Add ensure_default_config_and_workspace() that creates a minimal
config.toml (with provider, workspace_dir, and optional api_key/model)
and seeds the workspace directory structure (sessions/, memory/, state/,
cron/, skills/ subdirectories plus IDENTITY.md, USER.md, MEMORY.md,
AGENTS.md, and SOUL.md) when they don't already exist.

This function is called:
- At the end of run_docker_bootstrap(), so config and workspace files
  exist on the host volume regardless of whether onboard ran inside the
  container.
- After the [3/3] Finalizing setup onboard block in the native install
  path, covering --skip-build, --prefer-prebuilt, --skip-onboard, and
  cases where the binary wasn't found.

The function is idempotent: it only writes files that don't already
exist, so it never overwrites config or workspace files created by a
successful onboard run.

Also makes the container onboard failure non-fatal (|| true) so that
the fallback config generation always runs.

Fixes #3852
2026-03-18 15:15:47 -04:00
Argenis 440ad6e5b5 fix: handle double-serialized schedule in cron_add and cron_update (#3860) (#3905)
When LLMs pass the schedule parameter as a JSON string instead of a JSON
object, serde fails with "invalid type: string, expected internally
tagged enum Schedule". Add a deserialize_maybe_stringified helper that
detects stringified JSON values and parses the inner string before
deserializing, providing backward compatibility for both object and
string representations.

Fixes #3860
2026-03-18 15:15:22 -04:00
Argenis 2e41cb56f6 fix: enable vision support for llamacpp provider (#3907)
The llamacpp provider was instantiated with vision disabled by default, causing image transfers from Telegram to fail. Use new_with_vision() with vision enabled, matching the behavior of other compatible providers.

Fixes #3802
2026-03-18 15:14:57 -04:00
Argenis 2227fadb66 fix(tools): include tool_search instruction in deferred tools system prompt (#3826) (#3914)
The deferred MCP tools section in the system prompt only listed tool
names inside <available-deferred-tools> tags without any instruction
telling the LLM to call tool_search to activate them. In daemon and
Telegram mode, where conversations are shorter and less guided, the
LLM never discovered it should call tool_search, so deferred tools
were effectively unavailable.

Add a "## Deferred Tools" heading with explicit instructions that
the LLM MUST call tool_search before using any listed tool. This
ensures the LLM knows to activate deferred tools in all modes
(CLI, daemon, Telegram) consistently.

Also add tests covering:
- Instruction presence in the deferred section
- Multiple-server deferred tool search
- Cross-server keyword search ranking
- Activation persistence across multiple tool_search calls
- Idempotent re-activation
2026-03-18 15:13:58 -04:00
Argenis 162efbb49c fix(providers): recover from context window errors by truncating history (#3908)
When a provider returns a context-size-exceeded error, truncate the
oldest non-system messages from conversation history and retry instead
of immediately bailing out. This enables local models with small
context windows (llamafile, llama.cpp) to work by automatically
fitting the conversation within available context.

Closes #3894
2026-03-18 14:54:56 -04:00
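The truncate-and-retry strategy can be sketched with a toy message type. Types and the drop policy here are simplified stand-ins for the real provider code; the key invariant is that system messages survive and the oldest non-system messages go first:

```rust
// Toy version of the recovery strategy: on a context-size error, drop the
// oldest non-system messages and retry with the shortened history.
#[derive(Clone, Debug, PartialEq)]
struct Msg {
    role: &'static str,
    text: String,
}

fn truncate_history(history: &mut Vec<Msg>, drop_count: usize) {
    let mut dropped = 0;
    history.retain(|m| {
        if m.role != "system" && dropped < drop_count {
            dropped += 1;
            false // remove the oldest non-system messages first
        } else {
            true
        }
    });
}

fn main() {
    let mut h = vec![
        Msg { role: "system", text: "rules".into() },
        Msg { role: "user", text: "old".into() },
        Msg { role: "assistant", text: "old reply".into() },
        Msg { role: "user", text: "new".into() },
    ];
    truncate_history(&mut h, 2);
    assert_eq!(h.len(), 2);
    assert_eq!(h[0].role, "system"); // system prompt is always preserved
    assert_eq!(h[1].text, "new");
}
```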
argenis de la rosa 3c8b6d219a fix(test): use PID-scoped script path to prevent ETXTBSY in CI
The echo_provider() test helper writes a fake_claude.sh script to
a shared temp directory. When lib and bin test binaries run in
parallel (separate processes, separate OnceLock statics), one
process can overwrite the script while the other is executing it,
causing "Text file busy" (ETXTBSY). Scope the filename with PID
to isolate each test process.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 14:33:04 -04:00
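The PID-scoping idea is tiny but worth seeing concretely. A hypothetical helper (not the repo's actual code) might look like:

```rust
use std::env;
use std::process;

// Sketch of the ETXTBSY fix: scope the helper-script filename by PID so
// parallel test processes never overwrite a script another is executing.
fn pid_scoped_script_path(name: &str) -> std::path::PathBuf {
    env::temp_dir().join(format!("{name}-{}.sh", process::id()))
}

fn main() {
    let a = pid_scoped_script_path("fake_claude");
    // Every call within one process yields the same path...
    assert_eq!(a, pid_scoped_script_path("fake_claude"));
    // ...and it embeds this process's PID, so other processes get their own file.
    assert!(a.to_string_lossy().contains(&process::id().to_string()));
}
```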
Vasanth 58b98c59a8 feat(agent): add runtime model switching via model_switch tool (#3853)
Add support for switching AI models at runtime during a conversation.
The model_switch tool allows users to:
- Get current model state
- List available providers
- List models for a provider
- Switch to a different model

The switch takes effect immediately for the current conversation by
recreating the provider with the new model after tool execution.

Risk: Medium - internal state changes and provider recreation
2026-03-18 14:17:52 -04:00
argenis de la rosa d72e9379f7 fix(install): clean stale build cache on upgrade
When upgrading an existing installation, stale build artifacts in
target/release/build/ can cause compilation failures (e.g.
libsqlite3-sys bindgen.rs not found). Run cargo clean --release
before building when an upgrade is detected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 14:15:59 -04:00
Alix-007 e3e9db5210 fix(openrouter): respect provider timeout config 2026-03-19 01:36:57 +08:00
Nim G ad8a209bd7 feat(channel): add /stop command to cancel in-flight tasks
Adds an explicit /stop slash command that allows users on any non-CLI
channel (Matrix, Telegram, Discord, Slack, etc.) to cancel an agent
task that is currently running.

Changes:
- is_stop_command(): new helper that detects /stop (case-insensitive,
  optional @botname suffix), not gated on channel type
- /stop fast path in run_message_dispatch_loop: intercepts /stop before
  semaphore acquisition so the target task is never replaced in the store;
  fires CancellationToken on the running task; sends reply via tokio::spawn
  using the established two-step channel lookup pattern
- register_in_flight separated from interrupt_enabled: all non-CLI tasks
  now enter the in_flight_by_sender store, enabling /stop to reach them;
  auto-cancel-on-new-message remains gated on interrupt_enabled (Telegram/
  Slack only) — this is a deliberate broadening, not a side effect

Deferred to follow-up (feat/matrix-interrupt-on-new-message):
- interrupt_on_new_message config field for Matrix
- thread-aware interruption_scope_key (requires per-channel thread_ts
  semantics analysis; Slack always sets thread_ts, Matrix only for replies)

Supersedes #2855

Tests: 7 new unit tests for is_stop_command; all 4075 tests pass.
2026-03-18 12:32:20 -04:00
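An illustrative detector matching the description above (`/stop`, case-insensitive, optional `@botname` suffix) could look like this; the real `is_stop_command` may differ in detail:

```rust
// Sketch of /stop detection: case-insensitive, tolerant of surrounding
// whitespace, and accepting an optional @botname suffix.
fn is_stop_command(text: &str) -> bool {
    let t = text.trim().to_lowercase();
    t == "/stop" || (t.starts_with("/stop@") && t.len() > "/stop@".len())
}

fn main() {
    assert!(is_stop_command("/stop"));
    assert!(is_stop_command("  /STOP  "));
    assert!(is_stop_command("/stop@MyBot"));
    assert!(!is_stop_command("/stopwatch")); // no accidental prefix match
    assert!(!is_stop_command("please /stop"));
}
```

Intercepting this before semaphore acquisition, as the commit notes, is what keeps the cancel request from being queued behind the very task it is meant to cancel.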
Argenis 959b933841 fix(providers): preserve conversation context in Claude Code CLI (#3885)
* fix(providers): preserve conversation context in Claude Code CLI provider

Override chat_with_history to format full multi-turn conversation
history into a single prompt for the claude CLI, instead of only
forwarding the last user message.

Closes #3878

* fix(providers): fix ETXTBSY race in claude_code tests

Use OnceLock to initialize the fake_claude.sh test script exactly
once, preventing "Text file busy" errors when parallel tests
concurrently write and execute the same script file.
2026-03-18 11:13:42 -04:00
Argenis caf7c7194f fix(cron): prevent one-shot jobs from re-executing indefinitely (#3886)
Handle Schedule::At jobs in reschedule_after_run by disabling them
instead of rescheduling to a past timestamp. Also add a fallback in
persist_job_result to disable one-shot jobs if removal fails.

Closes #3868
2026-03-18 11:03:44 -04:00
Argenis ee7d542da6 fix: pass route-specific api_key through channel provider creation (#3881)
When using Channel mode with dynamic classification and routing, the
route-specific `api_key` from `[[model_routes]]` was silently dropped.
The system always fell back to the global `api_key`, causing 401 errors
when routing to `custom:` providers that require distinct credentials.

Root cause: `ChannelRouteSelection` only stored provider + model, and
`get_or_create_provider` always used `ctx.api_key` (the global key).

Changes:
- Add `api_key` field to `ChannelRouteSelection` so the matched route's
  credential survives through to provider creation.
- Update `get_or_create_provider` to accept and prefer a route-specific
  `api_key` over the global key.
- Use a composite cache key (provider name + api_key hash) to prevent
  cache poisoning when multiple routes target the same provider with
  different credentials.
- Wire the route api_key through query classification matching and the
  `/model` (SetModel) command path.

Fixes #3838
2026-03-18 10:06:06 -04:00
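The composite cache key can be sketched as below. The function name and key format are assumptions; the point is that two routes hitting the same provider with different credentials must not share a cached client:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of the composite cache key: hashing the route's api_key into the
// key prevents cache poisoning across routes that target the same provider.
fn provider_cache_key(provider: &str, api_key: &str) -> String {
    let mut h = DefaultHasher::new();
    api_key.hash(&mut h);
    // Only a hash of the credential goes into the key, never the secret itself.
    format!("{provider}:{:016x}", h.finish())
}

fn main() {
    let a = provider_cache_key("openrouter", "sk-route-a");
    let b = provider_cache_key("openrouter", "sk-route-b");
    assert_ne!(a, b); // distinct credentials -> distinct cache entries
    assert_eq!(a, provider_cache_key("openrouter", "sk-route-a"));
}
```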
Argenis d51ec4b43f fix(docker): remove COPY commands for dockerignored paths (#3880)
The Dockerfile and Dockerfile.debian COPY `firmware/`, `crates/robot-kit/`,
and `crates/robot-kit/Cargo.toml`, but `.dockerignore` excludes both
`firmware/` and `crates/robot-kit/`, causing COPY failures during build.

Since these are hardware-only paths not needed for the Docker runtime:
- Remove COPY commands for `firmware/` and `crates/robot-kit/`
- Remove dummy `crates/robot-kit/src` creation in dep-caching steps
- Use sed to strip `crates/robot-kit` from workspace members in the
  copied Cargo.toml so Cargo doesn't look for the missing manifest

Fixes #3836
2026-03-18 10:06:03 -04:00
Alix-007 d81eeefe52 fix(tools): normalize workspace-prefixed paths 2026-03-18 10:35:25 +08:00
Argenis 3d92b2a652 Merge pull request #3833 from zeroclaw-labs/fix/pairing-code-display
fix(web): display pairing code in dashboard
2026-03-17 22:16:50 -04:00
argenis de la rosa 3255051426 fix(web): display pairing code in dashboard instead of terminal-only
Fetch the current pairing code from GET /admin/paircode (localhost-only)
and display it in both the initial PairingDialog and the /pairing
management page. Users no longer need to check the terminal to find
the 6-digit code — it appears directly in the web UI.

Falls back gracefully when the admin endpoint is unreachable (e.g.
non-localhost access), showing the original "check your terminal" prompt.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 22:01:03 -04:00
Argenis dcaf330848 Merge pull request #3828 from zeroclaw-labs/fix/readme
fix(readme): update social links across all locales
2026-03-17 19:15:29 -04:00
argenis de la rosa 7f8de5cb17 fix(readme): update Facebook group URL and add Discord, TikTok, RedNote badges
Update Facebook group link from /groups/zeroclaw to /groups/zeroclawlabs
across all 31 README locale files. Add Discord, TikTok, and RedNote
social badges to the badge section of all READMEs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 19:02:53 -04:00
Argenis 1341cfb296 Merge pull request #3827 from zeroclaw-labs/feat/plugin-wasm
feat(plugins): add WASM plugin system with Extism runtime
2026-03-17 18:51:41 -04:00
argenis de la rosa 9da620a5aa fix(ci): add cargo-audit ignore for wasmtime vulns from extism
cargo-audit uses .cargo/audit.toml (not deny.toml) for its ignore
list. These 3 wasmtime advisories are transitive via extism 1.13.0
with no upstream fix available. Plugin system is feature-gated.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:38:02 -04:00
argenis de la rosa d016e6b1a0 fix(ci): ignore wasmtime vulns from extism 1.13.0 (no upstream fix)
RUSTSEC-2026-0006, RUSTSEC-2026-0020, RUSTSEC-2026-0021 are all in
wasmtime 37.x pinned by extism. No newer extism release available.
Plugin system is behind a feature flag to limit exposure.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:35:08 -04:00
argenis de la rosa 9b6360ad71 fix(ci): ignore unmaintained transitive deps from extism and indicatif
Add cargo-deny ignore entries for RUSTSEC-2024-0388 (derivative),
RUSTSEC-2025-0057 (fxhash), and RUSTSEC-2025-0119 (number_prefix).
All are transitive dependencies we cannot directly control.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:33:03 -04:00
argenis de la rosa dc50ca9171 fix(plugins): update lockfile and fix ws.rs formatting
Sync Cargo.lock with new Extism/WASM plugin dependencies and apply
rustfmt line-wrap fix in gateway WebSocket handler.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:30:41 -04:00
argenis de la rosa 67edd2bc60 fix(plugins): integrate WASM tools into registry, add gateway routes and tests
- Wire WASM plugin tools into all_tools_with_runtime() behind
  cfg(feature = "plugins-wasm"), discovering and registering tool-capable
  plugins from the configured plugins directory at startup.
- Add /api/plugins gateway endpoint (cfg-gated) for listing plugin status.
- Add mod plugins declaration to main.rs binary crate so crate::plugins
  resolves when the feature is enabled.
- Add unit tests for PluginHost: empty dir, manifest discovery, capability
  filtering, lookup, and removal.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:10:24 -04:00
argenis de la rosa dcf66175e4 feat(plugins): add example weather plugin and manifest
Add a standalone example plugin demonstrating the WASM plugin interface:
- example-plugin/Cargo.toml: cdylib crate targeting wasm32-wasip1
- example-plugin/src/lib.rs: mock weather tool using extism-pdk
- example-plugin/manifest.toml: plugin manifest declaring tool capability

This crate is intentionally NOT added to the workspace members since it
targets wasm32-wasip1 and would break the main build.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:09:54 -04:00
argenis de la rosa b3bb79d805 feat(plugins): add PluginHost, WasmTool, and WasmChannel bridges
Implement the core plugin infrastructure:
- PluginHost: discovers plugins from the workspace plugins directory,
  loads manifest.toml files, supports install/remove/list/info operations
- WasmTool: bridges WASM plugins to the Tool trait (execute stub pending
  Extism runtime wiring)
- WasmChannel: bridges WASM plugins to the Channel trait (send/listen
  stubs pending Extism runtime wiring)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:09:54 -04:00
argenis de la rosa c857b64bb4 feat(plugins): add Extism dependency, feature flag, and plugin module skeleton
Introduce the WASM plugin system foundation:
- Add extism 1.9 as an optional dependency behind `plugins-wasm` feature
- Create `src/plugins/` module with manifest types, error types, and stub host
- Add `Plugin` CLI subcommands (list, install, remove, info) behind cfg gate
- Add `PluginsConfig` to the config schema with sensible defaults

All plugin code is behind `#[cfg(feature = "plugins-wasm")]` so the default
build is unaffected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 18:09:54 -04:00
Argenis c051f0323e Merge pull request #3822 from zeroclaw-labs/feat/pairing-dashboard
feat(gateway): add pairing dashboard with device management
2026-03-17 17:54:28 -04:00
Argenis dea5c67ab0 Merge pull request #3821 from zeroclaw-labs/feat/self-test-update
feat(cli): add self-test and update commands
2026-03-17 17:54:25 -04:00
Argenis a14afd7ef9 Merge pull request #3820 from zeroclaw-labs/feat/docker-fix
feat(docker): web-builder stage, healthcheck probe, resource limits
2026-03-17 17:54:22 -04:00
argenis de la rosa 4455b24056 fix(pairing): add SQLite persistence, fix config defaults, align with plan
- Add SQLite persistence to DeviceRegistry (backed by rusqlite)
- Rename config fields: ttl_secs -> code_ttl_secs, max_pending -> max_pending_codes, max_attempts -> max_failed_attempts
- Update defaults: code_length 6 -> 8, ttl_secs 300 -> 3600, max_pending 10 -> 3
- Add attempts tracking to PendingPairing struct
- Add token_hash() and authenticate_and_hash() to PairingGuard
- Fix route paths: /api/pairing/submit -> /api/pair, /api/devices/{id}/rotate -> /api/devices/{id}/token/rotate
- Add QR code placeholder to Pairing.tsx
- Pass workspace_dir to DeviceRegistry constructor

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:44:55 -04:00
argenis de la rosa 8ec6522759 fix(gateway): add new fields to test AppState and GatewayConfig constructors
Add device_registry, pending_pairings to test AppState instances and
pairing_dashboard to test GatewayConfig to fix compilation of tests
after the new pairing dashboard fields were introduced.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:36:01 -04:00
argenis de la rosa a818edb782 feat(web): add pairing dashboard page
Add Pairing page with device list table, pairing code generation,
and device revocation. Create useDevices hook for reusable device
fetching. Wire /pairing route into App.tsx router.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:35:39 -04:00
argenis de la rosa e0af3d98dd feat(gateway): extend WebSocket handshake with optional connect params
Add ConnectParams struct for an optional first-frame connect handshake.
If the first WebSocket message is {"type":"connect",...}, connection
parameters (session_id, device_name, capabilities) are extracted and
a "connected" ack is sent back. Old clients sending "message" first
still work unchanged (backward-compatible).

Extract process_chat_message() helper to avoid duplication between
fallback first-message handling and the main message loop.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:35:39 -04:00
argenis de la rosa 48bdbde26c feat(gateway): add device registry and pairing API handlers
Introduce DeviceRegistry, PairingStore, and five new API endpoints:
- POST /api/pairing/initiate — generate a new pairing code
- POST /api/pairing/submit — submit code with device metadata
- GET /api/devices — list paired devices
- DELETE /api/devices/{id} — revoke a paired device
- POST /api/devices/{id}/rotate — rotate a device token

Wire into AppState and gateway router. Registry is only created
when require_pairing is enabled.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:34:56 -04:00
argenis de la rosa dc495a105f feat(config): add PairingDashboardConfig to gateway schema
Add PairingDashboardConfig struct with configurable code_length,
ttl_secs, max_pending, max_attempts, and lockout_secs fields.
Nested under GatewayConfig as `pairing_dashboard` with serde defaults.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:32:27 -04:00
argenis de la rosa fe9addcfe0 fix(cli): align self-test and update commands with implementation plan
- Export commands module from lib.rs (pub mod commands) for external consumers
- Add --force and --version flags to the Update CLI command
- Wire version parameter through to check() and run() in update.rs,
  supporting targeted version fetches via GitHub releases/tags API
- Add WebSocket handshake check (check_websocket_handshake) to the full
  self-test suite in self_test.rs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:24:59 -04:00
argenis de la rosa 5bfa5f18e1 feat(cli): add update command with 6-phase pipeline and rollback
Add `zeroclaw update` command with a 6-phase self-update pipeline:
1. Preflight — check GitHub releases API for newer version
2. Download — fetch platform-specific binary to temp dir
3. Backup — copy current binary to .bak for rollback
4. Validate — size check + --version smoke test on download
5. Swap — overwrite current binary with new version
6. Smoke test — verify updated binary runs, rollback on failure

Supports --check flag for update-check-only mode without installing.
Includes version comparison logic with unit tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:24:58 -04:00
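The phase-1 preflight hinges on version comparison. A minimal sketch, assuming plain MAJOR.MINOR.PATCH tags (the real check() in update.rs may handle more, e.g. targeted version fetches via the releases/tags API):

```rust
// Parse "0.5.0" or "v0.5.0" into a comparable (major, minor, patch) triple.
fn parse_version(v: &str) -> Option<(u64, u64, u64)> {
    let mut parts = v.trim_start_matches('v').splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

/// True when `latest` is strictly newer than `current`.
/// Tuple comparison is lexicographic, which matches semver ordering here.
fn update_available(current: &str, latest: &str) -> bool {
    match (parse_version(current), parse_version(latest)) {
        (Some(c), Some(l)) => l > c,
        _ => false, // unparseable tag: do nothing rather than self-update blindly
    }
}

fn main() {
    assert!(update_available("0.4.3", "0.5.0"));
    assert!(!update_available("0.5.0", "0.5.0"));
    assert!(!update_available("v0.5.0", "0.4.3"));
    println!("version preflight ok");
}
```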
argenis de la rosa 72b7e1e647 feat(cli): add self-test command with quick and full modes
Add `zeroclaw self-test` command with two modes:
- Quick mode (--quick): 8 offline checks including config, workspace,
  SQLite, provider/tool/channel registries, security policy, and version
- Full mode (default): adds gateway health and memory round-trip checks

Creates src/commands/ module structure with self_test and update stubs.
Adds indicatif and tempfile runtime dependencies for the update pipeline.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 17:24:58 -04:00
argenis de la rosa 413c94befe chore(docker): tighten compose resource limits
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 16:02:15 -04:00
argenis de la rosa 5aa6026fa1 feat(cli): add status --format=exit-code for Docker healthcheck
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 16:02:15 -04:00
argenis de la rosa 6eca841bd7 feat(docker): add web-builder stage and update .dockerignore
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 16:02:15 -04:00
Argenis 50e8d4f5f8 fix(ci): use pre-built binaries for Debian Docker image (#3814)
The Debian compatibility image was building from source with QEMU
cross-compilation for ARM64, which is extremely slow and was getting
cancelled by the concurrency group. Switch to using pre-built binaries
(same as the distroless image) with a debian:bookworm-slim runtime base.

- Add Dockerfile.debian.ci (mirrors Dockerfile.ci with Debian runtime)
- Update release-beta-on-push.yml to use docker-ctx + pre-built bins
- Update release-stable-manual.yml with same fix
- Drop GHA cache layers (no longer building from source)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 15:21:15 -04:00
Argenis fc2aac7c94 feat(gateway): persist WS chat sessions across restarts (#3813)
Gateway WebSocket chat sessions were in-memory only — conversation
history was lost on gateway restart, macOS sleep/wake, or client
reconnect. This wires up the existing SessionBackend (SQLite) to
the gateway WS handler so sessions survive restarts and reconnections.

Changes:
- Add delete_session() to SessionBackend trait + SQLite implementation
- Add session_persistence and session_ttl_hours to GatewayConfig
- Add Agent::seed_history() to hydrate agent from persisted messages
- Initialize SqliteSessionBackend in run_gateway() when enabled
- Send session_start message on WS connect with session_id + resumed
- Persist user/assistant messages after each turn
- Add GET /api/sessions and DELETE /api/sessions/{id} REST endpoints
- Bump version to 0.5.0

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 14:26:39 -04:00
Argenis 4caa3f7e6f fix(web): remove duplicate dashboard keys in Turkish locale (#3812)
The Turkish (tr) locale section had a duplicate "Dashboard specific
labels" block that repeated 19 keys already defined earlier, causing
TypeScript error TS1117. Moved the unique keys (provider_model,
paired_yes, etc.) into the primary dashboard section and removed
the duplicate block.

Fixes build failure introduced by #3777.
2026-03-17 14:13:46 -04:00
Argenis 3bc6ec3cf5 fix: only tweet for stable releases, not beta builds (#3808)
Remove tweet job from beta workflow. Update tweet-release.yml to diff
against previous stable tag (excluding betas) to capture all features
across the full release cycle. Simplify tweet format to feature-focused
style without contributor counts.

Supersedes #3575.
2026-03-17 14:06:46 -04:00
Argenis f3fbd1b094 fix(web): preserve provider runtime options in ws agent (#3807)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 14:06:22 -04:00
Yingpeng MA 79e8252d7a feat(web/i18n): add full Chinese locale and complete Turkish translations (#3777)
- Add comprehensive Simplified Chinese (zh) translations for all UI strings
- Extend and complete Turkish (tr) translations
- Fill in missing English (en) translation keys
- Reset default locale to 'en'
- Update language toggle to cycle through all three locales: en → zh → tr
2026-03-17 13:40:25 -04:00
Marijan Petričević 924521c927 config/schema: add serde default to AutonomyConfig (#3691)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 13:40:18 -04:00
Argenis 07ca270f03 fix(security): restore tokens.is_empty() guard, add re-pairing hint (#3738)
Revert "always generate pairing code" to tighter security posture:
codes are only generated on first startup when no tokens exist. Add
a CLI hint to the gateway banner so operators know how to re-pair
on demand. Fix install.sh to not use --new on fresh install (avoids
invalidating the auto-generated code). Fix onboard to show an
informational message instead of a throwaway PairingGuard.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 13:40:02 -04:00
Alix-007 e08091a2e2 fix(install): print PATH guidance after cargo install (#3769)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:53 -04:00
Alix-007 1f1123d071 fix(channels): allow low-risk shell in non-interactive mode (#3771)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:37 -04:00
Alix-007 d5bc46238a fix(install): skip prebuilt flow on musl (#3788)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:29 -04:00
Alix-007 843973762a ci(docker): publish debian compatibility image (#3789)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:20 -04:00
Alix-007 5f8d7d7347 fix(daemon): preserve deferred MCP tools in /api/chat (#3790)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:12 -04:00
Alix-007 7b3bea8d01 fix(agent): resolve deferred MCP tools by suffix (#3793)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:39:03 -04:00
Alix-007 ac461dc704 fix(docker): align debian image glibc baseline (#3794)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:38:55 -04:00
Alix-007 f04e56d9a1 feat(skills): support YAML frontmatter in SKILL.md (#3797)
* feat(skills): support YAML frontmatter in SKILL.md

* fix(skills): preserve nested open-skill names

---------

Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:38:49 -04:00
Alix-007 1d6f482b04 fix(build): rerun embedded web assets when dist changes (#3799)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:38:40 -04:00
Alix-007 ba6d0a4df9 fix(release): include matrix channel in official builds (#3800)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 13:38:33 -04:00
Alix-007 3cf873ab85 fix(groq): fall back on tool validation 400s (#3778)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:23:39 -04:00
Argenis 025724913d feat(runtime): add configurable reasoning effort (#3785)
* feat(runtime): add configurable reasoning effort

* fix(test): add missing reasoning_effort field in live test

Add reasoning_effort: None to ProviderRuntimeOptions construction in
openai_codex_vision_e2e.rs to fix E0063 compile error.

---------

Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 09:21:53 -04:00
project516 49dd4cd9da Change AppImage to tar.gz in arduino-uno-q-setup.md (#3754)
Arduino App Lab is a tar.gz file for Linux, not an AppImage

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:19:38 -04:00
dependabot[bot] 0664a5e854 chore(deps): bump rust from 7d37016 to da9dab7 (#3776)
Bumps rust from `7d37016` to `da9dab7`.

---
updated-dependencies:
- dependency-name: rust
  dependency-version: 1.94-slim
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:16:21 -04:00
Argenis acd09fbd86 feat(ci): use pre-built binaries for Docker images (#3784)
Instead of compiling Rust from source inside Docker (~60 min),
download the already-built linux binaries from the build matrix
and copy them into a minimal distroless image (~2 min).

- Add Dockerfile.ci for release workflows (no Rust toolchain needed)
- Update both beta and stable workflows to use pre-built artifacts
- Drop Docker job timeout from 60 to 15 minutes
- Original Dockerfile unchanged for local dev builds
2026-03-17 09:03:13 -04:00
Alix-007 0f7d1fceeb fix(channels): hide tool-call notifications by default (#3779)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 08:52:49 -04:00
GhostC 01e13ac92d fix(skills): allow sibling markdown links within skills root (#3781)
Made-with: Cursor
2026-03-17 08:31:20 -04:00
Argenis a9a6113093 fix(docs): revert unauthorized CLAUDE.md additions from #3604 (#3761)
PR #3604 included CLAUDE.md changes referencing non-existent modules
(src/security/taint.rs, src/sop/workflow.rs) and duplicating content
from CONTRIBUTING.md. These additions violate the anti-pattern rule
against modifying CLAUDE.md in feature PRs.
2026-03-17 01:56:51 -04:00
Giulio V 906951a587 feat(multi): LinkedIn tool, WhatsApp voice notes, and Anthropic OAuth fix (#3604)
* feat(tools): add native LinkedIn integration tool

Add a config-gated LinkedIn tool that enables ZeroClaw to interact with
LinkedIn's REST API via OAuth2. Supports creating posts, listing own
posts, commenting, reacting, deleting posts, viewing engagement stats,
and retrieving profile info.

Architecture:
- linkedin.rs: Tool trait impl with action-dispatched design
- linkedin_client.rs: OAuth2 token management and API wrappers
- Config-gated via [linkedin] enabled = false (default off)
- Credentials loaded from workspace .env file
- Automatic token refresh with line-targeted .env update

39 unit tests covering security enforcement, parameter validation,
credential parsing, and token management.

* feat(linkedin): configurable content strategy and API version

- Expand LinkedInConfig with api_version and nested LinkedInContentConfig
  (rss_feeds, github_users, github_repos, topics, persona, instructions)
- Add get_content_strategy tool action so agents can read config at runtime
- Fix hardcoded LinkedIn API version 202402 (expired) → configurable,
  defaulting to 202602
- LinkedInClient accepts api_version as parameter instead of static header
- 4 new tests (43 total), all passing

* feat(linkedin): add multi-provider image generation for posts

Add ImageGenerator with provider chain (DALL-E, Stability AI, Imagen, Flux)
and SVG fallback card. LinkedIn tool create_post now supports generate_image
parameter. Includes LinkedIn image upload (register → upload → reference),
configurable provider priority, and 14 new tests.

* feat(whatsapp): add voice note transcription and TTS voice replies

- Add STT support: download incoming voice notes via wa-rs, transcribe
  with OpenAI Whisper (or Groq), send transcribed text to agent
- Add TTS support: synthesize agent replies to Opus audio via OpenAI
  TTS, upload encrypted media, send as WhatsApp voice note (ptt=true)
- Voice replies only trigger when user sends a voice note; text
  messages get text replies only. Flag is consumed after one use to
  prevent multiple voice notes per agent turn
- Fix transcription module to support OpenAI API key (not just Groq):
  auto-detect provider from API URL, check ANTHROPIC_OAUTH_TOKEN /
  OPENAI_API_KEY / GROQ_API_KEY env vars in priority order
- Add optional api_key field to TranscriptionConfig for explicit key
- Add response_format: opus to OpenAI TTS for WhatsApp compatibility
- Add channel capability note so agent knows TTS is automatic
- Wire transcription + TTS config into WhatsApp Web channel builder

* fix(providers): prefer ANTHROPIC_OAUTH_TOKEN over global api_key

When the Anthropic provider is used alongside a non-Anthropic primary
provider (e.g. custom: gateway), the global api_key would be passed
as credential override, bypassing provider-specific env vars. This
caused Claude Code subscription tokens (sk-ant-oat01-*) to be ignored
in favor of the unrelated gateway JWT.

Fix: for the anthropic provider, check ANTHROPIC_OAUTH_TOKEN and
ANTHROPIC_API_KEY env vars before falling back to the credential
override. This mirrors the existing MiniMax OAuth pattern and enables
subscription-based auth to work as a fallback provider.

* feat(linkedin): add scheduled post support via LinkedIn API

Add scheduled_at parameter to create_post and create_post_with_image.
When provided (RFC 3339 timestamp), the post is created as a DRAFT
with scheduledPublishOptions so LinkedIn publishes it automatically
at the specified time. This enables the cron job to schedule a week
of posts in advance directly on LinkedIn.

* fix(providers): prefer env vars for openai and groq credential resolution

Generalize the Anthropic OAuth fix to also cover openai and groq
providers. When used alongside a non-matching primary provider (e.g.
a custom: gateway), the global api_key would be passed as credential
override, causing auth failures. Now checks provider-specific env
vars (OPENAI_API_KEY, GROQ_API_KEY) before falling back to the
credential override.

* fix(whatsapp): debounce voice replies to voice final answer only

The voice note TTS was triggering on the first send() call, which was
often intermediate tool output (URLs, JSON, web fetch results) rather
than the actual answer. This produced incomprehensible voice notes.

Fix: accumulate substantive replies (>30 chars, not URLs/JSON/code)
in a pending_voice map. A spawned debounce task waits 4 seconds after
the last substantive message, then synthesizes and sends ONE voice
note with the final answer. Intermediate tool outputs are skipped.

This ensures the user hears the actual answer in the correct language,
not raw tool output in English.

* fix(whatsapp): voice in = voice out, text in = text out

Rewrite voice reply logic with clean separation:
- Voice note received: ALL text output suppressed. Latest message
  accumulated silently. After 5s of no new messages, ONE voice note
  sent with the final answer. No tool outputs, no text, just voice.
- Text received: normal text reply, no voice.

Atomic debounce: multiple spawned tasks race but only one can extract
the pending message (remove-inside-lock pattern). Prevents duplicate
voice notes.

* fix(whatsapp): voice replies send both text and voice note

Voice note in → text replies sent normally in real-time PLUS one
voice note with the final answer after 10s debounce. Only substantive
natural-language messages are voiced (tool outputs, URLs, JSON, code
blocks filtered out). Longer debounce (10s) ensures the agent
completes its full tool chain before the voice note fires.

Text in → text out only, no voice.

* fix(channels): suppress tool narration and ack reactions

- Add system prompt instruction telling the agent to NEVER narrate
  tool usage (no "Let me fetch..." or "I will use http_request...")
- Disable ack_reactions (emoji reactions on incoming messages)
- Users see only the final answer, no intermediate steps

* docs(claude): add full CONTRIBUTING.md guidelines to CLAUDE.md

Add PR template requirements, code naming conventions, architecture
boundary rules, validation commands, and branch naming guidance
directly to CLAUDE.md for AI assistant reference.

* fix(docs): add blank lines around headings in CLAUDE.md for markdown lint

* fix(channels): strengthen tool narration suppression and fix large_futures

- Move anti-narration instruction to top of channel system prompt
- Add emphatic instruction for WhatsApp/voice channels specifically
- Add outbound message filter to strip tool-call-like patterns (🔧)
- Box::pin the two-phase heartbeat agent::run call (16664 bytes on Linux)
2026-03-17 01:55:05 -04:00
Giulio V 220745e217 feat(channels): add Reddit, Bluesky, and generic Webhook adapters (#3598)
* feat(channels): add Reddit, Bluesky, and generic Webhook adapters

- Reddit: OAuth2 polling for mentions/DMs/replies, comment and DM sending
- Bluesky: AT Protocol session auth, notification polling, post replies
- Webhook: Axum HTTP server for inbound, configurable outbound POST/PUT
- All three follow existing channel patterns with tests

* fix(channels): use neutral test fixtures and improve test naming in webhook
2026-03-17 01:26:58 -04:00
Giulio V 61de3d5648 feat(knowledge): add knowledge graph for expertise capture and reuse (#3596)
* feat(knowledge): add knowledge graph for expertise capture and reuse

SQLite-backed knowledge graph system for consulting firms to capture,
organize, and reuse architecture decisions, solution patterns, lessons
learned, and expert matching across client engagements.

- KnowledgeGraph (src/memory/knowledge_graph.rs): node CRUD, edge
  creation, FTS5 full-text search, tag filtering, subgraph traversal,
  expert ranking by authored contributions, graph statistics
- KnowledgeTool (src/tools/knowledge_tool.rs): Tool trait impl with
  capture, search, relate, suggest, expert_find, lessons_extract, and
  graph_stats actions
- KnowledgeConfig (src/config/schema.rs): disabled by default,
  configurable db_path/max_nodes, cross_workspace_search off by default
  for client data isolation
- Wired into tools factory (conditional on config.knowledge.enabled)

20 unit tests covering node CRUD, edge creation, search ranking,
subgraph queries, expert ranking, and tool actions.

* fix: address CodeRabbit review findings

- Fix UTF-8 truncation panic in truncate_str by using char-based
  iteration instead of byte indexing
- Add config validation for knowledge.max_nodes > 0
- Add subgraph depth boundary validation (must be > 0, capped at 100)

* fix(knowledge): address remaining CodeRabbit review issues

- MAJOR: Add db_path non-empty validation in Config::validate()
- MAJOR: Reject tags containing commas in add_node (comma is separator)
- MAJOR: Fix subgraph depth boundary (0..depth instead of 0..=depth)
- MAJOR: Apply project and node_type filters consistently in both
  tag-only and similarity search paths

* fix: correct subgraph traversal test assertion and sync CI workflows
2026-03-17 01:11:29 -04:00
Giulio V 675a5c9af0 feat(tools): add Google Workspace CLI (gws) integration (#3616)
* feat(tools): add Google Workspace CLI (gws) integration

Adds GoogleWorkspaceTool for interacting with Google Drive, Sheets,
Gmail, Calendar, Docs, and other Workspace services via CLI.

- Config-gated (google_workspace.enabled)
- Service allowlist for restricted access
- Requires shell access for CLI delegation
- Input validation against shell injection
- Wrong-type rejection for all optional parameters
- Config validation for allowed_services (empty, duplicate, malformed)
- Registered in integrations registry and CLI discovery

Closes #2986

* style: fix cargo fmt + clippy violations

* feat(google-workspace): expand config with auth, rate limits, and audit settings

* fix(tools): define missing GWS_TIMEOUT_SECS constant

* fix: Box::pin large futures and resolve duplicate Default impl

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-17 00:52:59 -04:00
Giulio V b099728c27 feat(stt): multi-provider STT with TranscriptionProvider trait (#3614)
* feat(stt): add multi-provider STT with TranscriptionProvider trait

Refactors single-endpoint transcription to support multiple providers:
Groq (existing), OpenAI Whisper, Deepgram, AssemblyAI, and Google Cloud
Speech-to-Text. Adds TranscriptionManager for provider routing with
backward-compatible config fields.

* style: fix cargo fmt + clippy violations

* fix: Box::pin large futures and resolve merge conflicts with master

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-17 00:33:41 -04:00
Alix-007 f87c7442b9 chore(pr): restore merge-base docs file for #3356 2026-03-17 12:11:35 +08:00
Alix-007 0a191fc02c chore(pr): drop unrelated docs delta from #3356 2026-03-17 12:08:20 +08:00
Argenis 1ca2092ca0 test(channel): add QQ markdown msg_type regression test (#3752)
Verify that QQ send body uses msg_type 2 with nested markdown object
instead of msg_type 0 with top-level content. Adapted from #3668.
2026-03-16 22:03:43 -04:00
Giulio V 5e3308eaaa feat(providers): add Claude Code, Gemini CLI, and KiloCLI subprocess providers (#3615)
* feat(providers): add Claude Code, Gemini CLI, and KiloCLI subprocess providers

Adds three new local subprocess-based providers for AI CLI tools.
Each provider spawns the CLI as a child process, communicates via
stdin/stdout pipes, and parses responses into ChatResponse format.

* fix: resolve clippy unnecessary_debug_formatting and rustfmt violations

* fix: resolve remaining clippy unnecessary_debug_formatting in CLI providers

* fix(providers): add AiAgent CLI category for subprocess providers
2026-03-16 21:51:05 -04:00
Chris Hengge ec255ad788 fix(tool): expand cron_add and cron_update parameter schemas (#3671)
The schedule field in cron_add used a bare {"type":"object"} with a
description string encoding a tagged union in pseudo-notation. The patch
field in cron_update was an opaque {"type":"object"} despite CronJobPatch
having nine fully-typed fields. Both gaps cause weaker instruction-following
models to produce malformed or missing nested JSON when invoking these tools.

Changes:
- cron_add: expand schedule into a oneOf discriminated union with explicit
  properties and required fields for each variant (cron/at/every), matching
  the Schedule enum in src/cron/types.rs exactly
- cron_add: add descriptions to all previously undocumented top-level fields
- cron_add: expand delivery from a bare inline comment to fully-specified
  properties with per-field descriptions
- cron_update: expand patch from opaque object to full properties matching
  CronJobPatch (name, enabled, command, prompt, model, session_target,
  delete_after_run, schedule, delivery)
- cron_update: schedule inside patch mirrors the same oneOf expansion
- Both: add inline NOTE comments flagging that oneOf is correct for
  OpenAI-compatible APIs but SchemaCleanr::clean_for_gemini must be
  applied if Gemini native tool calling is ever wired up
- Both: add schema-shape tests using the existing test_config/test_security
  helper pattern, covering oneOf variant structure, required fields, and
  delivery channel enum completeness

No behavior changes. No new dependencies. Backward compatible: the runtime
deserialization path (serde on Schedule/CronJobPatch) is unchanged.

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-16 21:45:49 -04:00
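The oneOf discriminated union described above has roughly this shape. This is an illustrative fragment only: the variant field names and descriptions are assumptions, and the Schedule enum in src/cron/types.rs is the authoritative definition.

```json
{
  "schedule": {
    "description": "When the job runs (exactly one variant required)",
    "oneOf": [
      {
        "type": "object",
        "properties": {
          "cron": { "type": "string", "description": "cron expression" }
        },
        "required": ["cron"]
      },
      {
        "type": "object",
        "properties": {
          "at": { "type": "string", "description": "RFC 3339 timestamp for a one-shot run" }
        },
        "required": ["at"]
      },
      {
        "type": "object",
        "properties": {
          "every": { "type": "string", "description": "repeat interval" }
        },
        "required": ["every"]
      }
    ]
  }
}
```

Explicit per-variant `required` fields are what lets weaker instruction-following models emit well-formed nested JSON instead of guessing the union shape from a prose description.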
Sid Jain 7182f659ce fix(slack): honor mention_only in runtime channel wiring (#3715)
* feat(slack): wire mention_only group reply policy

* feat(slack): expose mention_only in config and wizard defaults
2026-03-16 21:40:47 -04:00
Ericsunsk ae7681209d fix(openai-codex): decode utf-8 safely across stream chunks (#3723) 2026-03-16 21:40:45 -04:00
Markus Bergholz ee3469e912 Fix: Support Nextcloud Talk Activity Streams 2.0 webhook format (#3737)
* fix

* fix

* format
2026-03-16 21:40:42 -04:00
Argenis fec81d8e75 ci: auto-sync Scoop and AUR on stable release (#3743)
Add workflow_call triggers to pub-scoop.yml and pub-aur.yml so the
stable release workflow can invoke them automatically after publish.

Wire scoop and aur jobs into release-stable-manual.yml as post-publish
steps (parallel with tweet), gated on publish success.

Update ci-map.md trigger docs to reflect auto-called behavior.
2026-03-16 21:34:29 -04:00
Ricardo Madriz 9a073fae1a fix(tools): wire activated toolset into dispatch (#3747)
* fix(tools): wire ActivatedToolSet into tool dispatch and spec advertisement

When deferred MCP tools are activated via tool_search, they are stored
in ActivatedToolSet but never consulted by the tool call loop.
tool_specs is built once before the iteration loop and never refreshed,
so the provider API tools[] parameter never includes activated tools.
find_tool only searches the static registry, so execution dispatch also
fails silently.

Thread Arc<Mutex<ActivatedToolSet>> from creation sites through to
run_tool_call_loop. Rebuild tool_specs each iteration to merge base
registry specs with activated specs. Add fallback in execute_one_tool
to check the activated set when the static registry lookup misses.

Change ActivatedToolSet internal storage from Box<dyn Tool> to
Arc<dyn Tool> so we can clone the Arc out of the mutex guard before
awaiting tool.execute() (std::sync::MutexGuard is not Send).

* fix(tools): add activated_tools field to new ChannelRuntimeContext test site
2026-03-16 21:34:08 -04:00
Chris Hengge f0db63e53c fix(integrations): wire Cron and Browser status to config fields (#3750)
Both entries had hardcoded |_| IntegrationStatus::Available, ignoring
the live config entirely. Users with cron.enabled = true or
browser.enabled = true saw 'Available' on the /integrations dashboard
card instead of 'Active'.

Root cause: status_fn closures did not capture the Config argument.

Fix: replace the |_| stubs with |c| closures that check c.cron.enabled
and c.browser.enabled respectively, matching the pattern used by every
other wired entry in the registry (Telegram, Discord, Shell, etc.).

What did NOT change: ComingSoon entries, always-Active entries (Shell,
File System), platform entries, or any other registry logic.
2026-03-16 21:34:06 -04:00
Argenis df4dfeaf66 chore: bump version to 0.4.3 (#3749)
Update version across Cargo.toml, Cargo.lock, Scoop manifest,
and AUR PKGBUILD/.SRCINFO for the v0.4.3 stable release.
2026-03-16 21:23:04 -04:00
Giulio V e4ef25e913 feat(security): add Merkle hash-chain audit trail (#3601)
* feat(security): add Merkle hash-chain audit trail

Each audit entry now includes a SHA-256 hash linking it to the previous
entry (entry_hash, prev_hash, sequence), forming a tamper-evident chain.
Modifying any entry invalidates all subsequent hashes.

- Chain fields added to AuditEvent with #[serde(default)] for backward compat
- AuditLogger tracks chain state and recovers from existing logs on restart
- verify_chain() validates hash linkage, sequence continuity, and integrity
- Five new tests: genesis seed, multi-entry verify, tamper detection,
  sequence gap detection, and cross-restart chain recovery

* fix(security): replace personal name with neutral label in audit tests
2026-03-16 18:38:59 -04:00
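The chain structure this commit adds can be sketched as follows. The real AuditLogger uses SHA-256; std's DefaultHasher stands in here so the sketch needs no external crates. The field names (entry_hash, prev_hash, sequence) follow the commit message; everything else is illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct AuditEntry {
    sequence: u64,
    payload: String,
    prev_hash: u64,
    entry_hash: u64,
}

// Hash this entry together with the previous entry's hash, forming the link.
fn hash_entry(sequence: u64, payload: &str, prev_hash: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (sequence, payload, prev_hash).hash(&mut h);
    h.finish()
}

fn append(chain: &mut Vec<AuditEntry>, payload: &str) {
    let sequence = chain.len() as u64;
    let prev_hash = chain.last().map_or(0, |e| e.entry_hash); // 0 = genesis seed
    let entry_hash = hash_entry(sequence, payload, prev_hash);
    chain.push(AuditEntry { sequence, payload: payload.into(), prev_hash, entry_hash });
}

/// Recompute every link; any edited entry invalidates all subsequent hashes.
fn verify_chain(chain: &[AuditEntry]) -> bool {
    let mut prev = 0;
    for (i, e) in chain.iter().enumerate() {
        if e.sequence != i as u64
            || e.prev_hash != prev
            || e.entry_hash != hash_entry(e.sequence, &e.payload, e.prev_hash)
        {
            return false;
        }
        prev = e.entry_hash;
    }
    true
}

fn main() {
    let mut chain = Vec::new();
    append(&mut chain, "login");
    append(&mut chain, "shell: ls");
    append(&mut chain, "logout");
    assert!(verify_chain(&chain));
    chain[1].payload = "shell: rm -rf /".into(); // tamper with the middle entry
    assert!(!verify_chain(&chain));
    println!("tamper detected");
}
```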
Argenis c3a3cfc9a6 fix(agent): prevent duplicate tool schema injection in XML dispatcher (#3744)
Remove duplicate tool listing from XmlToolDispatcher::prompt_instructions()
since tool listing is already handled by ToolsSection in prompt.rs. The
method now only emits the XML protocol envelope.

Also fix UTF-8 char boundary panics in memory consolidation truncation by
using char_indices() instead of manual byte-boundary scanning.

Fixes #3643
Supersedes #3678

Co-authored-by: TJUEZ <TJUEZ@users.noreply.github.com>
2026-03-16 18:38:44 -04:00
伊姆 013fca6ad2 fix(config): support socks proxy scheme for Clash Verge (#3001)
Co-authored-by: imu <imu@sgcc.com.cn>
2026-03-16 18:38:03 -04:00
linyibin 23a0f25b44 fix(web): ensure web/dist exists in fresh clones (#3114)
The Rust build expects web/dist to exist (static assets). Track an empty
placeholder via web/dist/.gitkeep and adjust ignore rules to keep build
artifacts ignored while allowing the placeholder file.

Made-with: Cursor
2026-03-16 18:37:14 -04:00
Giulio V 2eaa8c45f4 feat(whatsapp-web): add voice message transcription support (#3617)
Adds audio message detection and transcription to WhatsApp Web channel.
Voice messages (PTT) are downloaded, transcribed via the existing
transcription subsystem (Groq Whisper), and delivered as text content.

- TranscriptionConfig field with builder pattern
- Duration limit enforcement before download
- MIME type mapping for audio formats
- Graceful error handling (skip on failure)
- Preserves full retry/reconnect state machine from master
2026-03-16 18:34:50 -04:00
Sandeep Ghael 85bf649432 fix(channel): resolve multi-room reply routing regression (#3224) (#3378)
* fix(channel): resolve multi-room reply routing regression (#3224)

PR #3224 (f0f0f808, "feat(matrix): add multi-room support") changed the
channel name format in matrix.rs from "matrix" to "matrix:!roomId", but
the channel lookup in mod.rs still does an exact match against
channels_by_name, which is keyed by Channel::name() (returns "matrix").

This mismatch causes target_channel to always resolve to None for Matrix
messages, silently dropping all replies.

Fix: fall back to a prefix match on the base channel name (before ':')
when the exact lookup fails. This preserves multi-room conversation
isolation while correctly routing replies to the originating channel.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: apply cargo fmt to channel routing fix

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Sandeep (Claude) <sghael+claude@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 18:32:56 -04:00
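The lookup fallback this fix describes can be sketched as below, under the stated assumption that channels_by_name is keyed by the base name ("matrix") while multi-room replies carry room-qualified names ("matrix:!roomId"). Types are illustrative, not the actual channel structs.

```rust
use std::collections::HashMap;

fn resolve_channel<'a>(
    channels_by_name: &'a HashMap<String, &'a str>,
    reply_channel: &str,
) -> Option<&'a str> {
    // Exact match first (the original, pre-multi-room behavior).
    if let Some(&ch) = channels_by_name.get(reply_channel) {
        return Some(ch);
    }
    // Fall back to the base name before ':' so "matrix:!room" finds "matrix",
    // preserving per-room isolation while routing the reply correctly.
    let base = reply_channel.split(':').next()?;
    channels_by_name.get(base).copied()
}

fn main() {
    let mut channels = HashMap::new();
    channels.insert("matrix".to_string(), "matrix-channel");
    channels.insert("telegram".to_string(), "telegram-channel");
    assert_eq!(
        resolve_channel(&channels, "matrix:!abc123:example.org"),
        Some("matrix-channel")
    );
    assert_eq!(resolve_channel(&channels, "telegram"), Some("telegram-channel"));
    assert_eq!(resolve_channel(&channels, "slack"), None);
    println!("reply routing ok");
}
```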
Giulio V 3ea99a7619 feat(tools): add browser delegation tool (#3610)
* feat(tools): add browser delegation tool for corporate web app interaction

Adds BrowserDelegateTool that delegates browser-based tasks to Claude Code
(or other browser-capable CLIs) for interacting with corporate tools
(Teams, Outlook, Jira, Confluence) via browser automation. Includes domain
validation (allow/blocklist), task templates, Chrome profile persistence
for SSO sessions, and timeout management.

* fix: resolve clippy violation in browser delegation tool

* fix(browser-delegate): validate URLs embedded in task text against domain policy

Scan the task text for http(s):// URLs using regex and validate each
against the allow/block domain lists before forwarding to the browser
CLI subprocess. This prevents bypassing domain restrictions by
embedding blocked URLs in the task parameter.

* fix(browser-delegate): constrain URL schemes, gate on runtime, document config

- Add has_shell_access gate so BrowserDelegateTool is only registered on
  shell-capable runtimes (skipped with warning on WASM/edge runtimes)
- Add boundary tests for javascript: and data: URL scheme rejection
- URL scheme validation (http/https only) and config docs were already
  addressed by a prior commit on this branch

* fix(tools): address CodeRabbit review findings for browser delegation

Remove dead `max_concurrent_tasks` config field and expand doc comments
on the `[browser_delegate]` config section in schema.rs.
2026-03-16 18:32:20 -04:00
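A minimal sketch of the scheme-plus-domain gate that commit describes. The helper name and the allowlist semantics here are assumptions for illustration; only the behavior (http/https only, `javascript:`/`data:` rejected, host checked against allowed domains) comes from the commit.

```rust
// Reject any URL that is not http(s), then check the host against
// an allowlist (exact match or subdomain of an allowed domain).
fn url_allowed(url: &str, allowed_domains: &[&str]) -> bool {
    let rest = if let Some(r) = url.strip_prefix("https://") {
        r
    } else if let Some(r) = url.strip_prefix("http://") {
        r
    } else {
        return false; // javascript:, data:, and friends fail here
    };
    let host = rest
        .split(|c| c == '/' || c == '?' || c == '#')
        .next()
        .unwrap_or("");
    allowed_domains
        .iter()
        .any(|d| host == *d || host.ends_with(&format!(".{d}")))
}

fn main() {
    let allow = ["example.com"];
    assert!(url_allowed("https://jira.example.com/browse/ZC-1", &allow));
    assert!(!url_allowed("javascript:alert(1)", &allow));
    assert!(!url_allowed("data:text/html;base64,AAAA", &allow));
    assert!(!url_allowed("https://evil.test/", &allow));
    println!("ok");
}
```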
Christian Pojoni 14f58c77c1 fix(tool+channel): revert invalid model set via model_routing_config (#3497)
When the LLM hallucinates an invalid model ID through the
model_routing_config tool's set_default action, the invalid model gets
persisted to config.toml. The channel hot-reload then picks it up and
every subsequent message fails with a non-retryable 404, permanently
killing the connection with no user recovery path.

Fix with two layers of defense:

1. Tool probe-and-rollback: after saving the new model, send a minimal
   chat request to verify the model is accessible. If the API returns a
   non-retryable error (404, auth failure, etc.), automatically restore
   the previous config and return a failure notice to the LLM.

2. Channel safety net: in maybe_apply_runtime_config_update, reject
   config reloads when warmup fails with a non-retryable error instead
   of applying the broken config anyway.

Co-authored-by: Christian Pojoni <christian.pojoni@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 18:30:36 -04:00
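The probe-and-rollback layer can be pictured with this control-flow sketch. All names and types here are stand-ins (a `String` plays the persisted config, and the probe closure reports whether a failure is retryable); only the save-probe-restore order is from the commit.

```rust
// Save the new model first, probe it with a minimal request, and
// restore the previous config on a non-retryable failure.
fn set_default_model<F>(config: &mut String, new_model: &str, probe: F) -> Result<(), String>
where
    F: Fn(&str) -> Result<(), bool>, // Err(true) = retryable, Err(false) = non-retryable
{
    let previous = config.clone();
    *config = new_model.to_string();
    match probe(new_model) {
        Ok(()) => Ok(()),
        // Non-retryable (404, auth failure): roll back and report.
        Err(false) => {
            *config = previous;
            Err(format!("model '{new_model}' rejected; previous config restored"))
        }
        // Transient errors don't prove the model is invalid; keep the change.
        Err(true) => Ok(()),
    }
}

fn main() {
    let mut cfg = "gpt-4o".to_string();
    assert!(set_default_model(&mut cfg, "not-a-model", |_| Err(false)).is_err());
    assert_eq!(cfg, "gpt-4o"); // rolled back
    assert!(set_default_model(&mut cfg, "gpt-4.1", |_| Ok(())).is_ok());
    assert_eq!(cfg, "gpt-4.1");
    println!("ok");
}
```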
dependabot[bot] b833eb19f5 chore(deps): bump rust in the docker-all group (#3692)
Bumps the docker-all group with 1 update: rust.


Updates `rust` from 1.93-slim to 1.94-slim

---
updated-dependencies:
- dependency-name: rust
  dependency-version: 1.94-slim
  dependency-type: direct:production
  dependency-group: docker-all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-16 18:28:51 -04:00
DotViegas 5db883b453 fix(providers): adjust temperature for OpenAI reasoning models (#2936)
Some OpenAI models (o1, o3, o4, gpt-5 variants) only accept temperature=1.0 and return errors with other values like 0.7. This change automatically adjusts the temperature parameter based on the model being used.

Changes:
- Add adjust_temperature_for_model() function to detect reasoning models
- Apply temperature adjustment in chat_with_system(), chat(), and chat_with_tools()
- Preserve user-specified temperature for standard models (gpt-4o, gpt-4-turbo, etc.)
- Force temperature=1.0 for reasoning models (o1, o3, o4, gpt-5, gpt-5-mini, gpt-5-nano, gpt-5.x-chat-latest)

Testing:
- Add 7 unit tests covering reasoning models, standard models, and edge cases
- All tests pass successfully
- Empirical testing documented in docs/openai-temperature-compatibility.md

Impact:
- Fixes temperature errors when using o1, o3, o4, and gpt-5 model families
- No breaking changes - transparent adjustment for end users
- Standard models continue to work with flexible temperature values

Risk: Low - isolated change within OpenAI provider, well-tested

Rollback: Revert this commit to restore previous behavior

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-16 18:28:01 -04:00
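The adjustment can be sketched like this. The function name comes from the commit message; the prefix-matching details are an assumption based on the model families it lists, and the real provider code may detect models differently.

```rust
// Detect reasoning-model families (o1, o3, o4, gpt-5 variants) that
// only accept temperature=1.0.
fn is_reasoning_model(model: &str) -> bool {
    ["o1", "o3", "o4", "gpt-5"].iter().any(|family| {
        model == *family
            || model.starts_with(&format!("{family}-"))
            || model.starts_with(&format!("{family}."))
    })
}

fn adjust_temperature_for_model(model: &str, requested: f32) -> f32 {
    if is_reasoning_model(model) {
        1.0 // reasoning models reject anything else
    } else {
        requested // standard models keep the user's value
    }
}

fn main() {
    assert_eq!(adjust_temperature_for_model("o1-preview", 0.7), 1.0);
    assert_eq!(adjust_temperature_for_model("gpt-5-mini", 0.3), 1.0);
    assert_eq!(adjust_temperature_for_model("gpt-4o", 0.7), 0.7);
    println!("ok");
}
```

Note that plain prefix checks must not catch `gpt-4o` (which contains `o4` as a substring), which is why the sketch anchors each family name at the start of the string with a `-` or `.` separator.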
Eddie's AI Agent 0ae515b6b8 fix(channel): correct Matrix image marker casing to match canonical format (#3519)
Co-authored-by: Eddie Tong <xinhant@gmail.com>
2026-03-16 18:25:33 -04:00
Giulio V 2deb91455d feat(observability): add Hands dashboard metrics and events (#3595)
Add HandStarted, HandCompleted, and HandFailed event variants to
ObserverEvent, and HandRunDuration, HandFindingsCount, HandSuccessRate
metric variants to ObserverMetric. Update all observer backends (log,
noop, verbose, prometheus, otel) to handle the new variants with
appropriate instrumentation. Prometheus backend registers hand_runs
counter, hand_duration histogram, and hand_findings counter. OTel
backend creates spans and records metrics for hand runs.
2026-03-16 18:24:47 -04:00
smallwhite 595b81be41 fix(telegram): avoid duplicate finalize_draft messages (#3259) 2026-03-16 18:24:19 -04:00
Chris Hengge 4f9d817ddb fix(memory): serialize MemoryCategory as plain string and guard dashboard render crashes (#3051)
The /memory dashboard page rendered a black screen when MemoryCategory::Custom
was serialized by serde's derived impl as a tagged object {"custom":"..."} but
the frontend expected a plain string. No navigation was possible without using
the browser Back button.

Changes:
- src/memory/traits.rs: replace derived serde impls with custom serialize
  (delegates to Display, emits plain snake_case string) and deserialize
  (parses known variants by name, falls through to Custom(s) for unknown).
  Adds memory_category_serde_uses_snake_case and memory_category_custom_roundtrip
  tests. No persistent storage migration needed — all backends (SQLite, Markdown,
  Postgres) use their own category_to_str/str_to_category helpers and never
  read serde-serialized category values back from disk.
- web/src/App.tsx: export ErrorBoundary class so render crashes surface a
  recoverable UI instead of a black screen. Adds aria-live="polite" to the
  pairing error paragraph for screen reader accessibility.
- web/src/components/layout/Layout.tsx: wrap Outlet in ErrorBoundary keyed
  by pathname so the navigation shell stays mounted during a page crash and
  the boundary resets on route change.

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-16 18:23:03 -04:00
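The round-trip the commit describes looks roughly like this. To keep the sketch dependency-free it uses plain string helpers rather than the actual serde impls, and the `Fact`/`Preference` variant names are hypothetical; only `Custom` and the snake-case-string-with-fallthrough behavior come from the commit.

```rust
#[derive(Debug, PartialEq)]
enum MemoryCategory {
    Fact,
    Preference,
    Custom(String),
}

// Known variants serialize as plain snake_case strings (the frontend's
// expectation), never as tagged objects like {"custom":"..."}.
fn category_to_str(c: &MemoryCategory) -> String {
    match c {
        MemoryCategory::Fact => "fact".into(),
        MemoryCategory::Preference => "preference".into(),
        MemoryCategory::Custom(s) => s.clone(),
    }
}

// Known names parse by name; anything else falls through to Custom(s).
fn str_to_category(s: &str) -> MemoryCategory {
    match s {
        "fact" => MemoryCategory::Fact,
        "preference" => MemoryCategory::Preference,
        other => MemoryCategory::Custom(other.to_string()),
    }
}

fn main() {
    assert_eq!(category_to_str(&MemoryCategory::Fact), "fact");
    assert_eq!(
        str_to_category("project_notes"),
        MemoryCategory::Custom("project_notes".into())
    );
    println!("ok");
}
```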
Darren.Zeng d13f5500e9 fix(install): add missing libssl-dev for Debian/Ubuntu (#3285)
The install.sh script was missing libssl-dev package for Debian/Ubuntu
systems. This caused compilation failures on Debian 12 and other
Debian-based distributions when building ZeroClaw from source.

The package is already included for other distributions:
- Alpine: openssl-dev
- Fedora/RHEL: openssl-devel
- Arch: openssl

This change adds libssl-dev to the apt-get install command to ensure
OpenSSL headers are available during compilation.

Fixes #2914

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-16 18:22:10 -04:00
Chris Hengge 1ccfe643ba fix(channel): bypass mention_only gate for Discord DMs (#2983)
When mention_only is enabled, the bot correctly requires an @mention in
guild (server) channels. However, Direct Messages have no guild_id and
are inherently private and addressed to the bot — requiring an @mention
in a DM is never correct and silently drops all DM messages.

Changes:
- src/channels/discord.rs: detect DMs via absence of guild_id in the
  gateway payload, compute effective_mention_only = self.mention_only && !is_dm,
  and pass that to normalize_incoming_content instead of self.mention_only.
  DMs bypass the mention gate; guild messages retain existing behaviour.
- Adds three tests: DM bypasses mention gate, guild message without mention
  is rejected, guild message with mention passes and strips the mention tag.

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-16 18:21:27 -04:00
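The gate computed in that commit reduces to one boolean expression. A minimal sketch, with a free function standing in for the channel method:

```rust
// DMs carry no guild_id in the Discord gateway payload, so the
// absence of one identifies a DM and bypasses the mention gate.
fn effective_mention_only(mention_only: bool, guild_id: Option<&str>) -> bool {
    let is_dm = guild_id.is_none();
    mention_only && !is_dm
}

fn main() {
    assert!(!effective_mention_only(true, None)); // DM: gate bypassed
    assert!(effective_mention_only(true, Some("guild-1"))); // guild: gate applies
    assert!(!effective_mention_only(false, Some("guild-1")));
    println!("ok");
}
```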
Vadim Rutkovsky d4d3e03e34 fix: add dummy src/lib.rs in Dockerfile.debian for dep caching stage (#3553) 2026-03-16 18:20:43 -04:00
simonfr aa0f11b0a2 fix(docker): copy build.rs into builder stage to invalidate dummy binary cache (#3570)
* Fix build with Docker

* fix(docker): copy build.rs into builder stage to fix rag feature activation
2026-03-16 18:19:55 -04:00
Argenis 806f8b4020 fix(docker): purge stale zeroclawlabs fingerprints before build (#3741)
The BuildKit cache mount persists .fingerprint and .d files across
source tree changes, causing compilation errors when imports change.
Clear zeroclawlabs-specific artifacts before building to ensure
a clean recompile of the main crate while preserving dep caches.
2026-03-16 18:10:36 -04:00
Ericsunsk 83803cef5b fix(memory): filter autosave noise and scope recall/store by session (#3695)
* fix(memory): filter autosave noise and scope memory by session

* style: format rebase-resolved gateway and memory loader

* fix(tests): update memory loader mock for session-aware context

* fix(openai-codex): decode utf-8 safely across stream chunks
2026-03-16 16:36:35 -04:00
Vast-stars dcb182cdd5 fix(agent): remove bare URL → curl fallback in GLM-style tool call parser (#3694)
* fix(agent): remove bare URL → curl fallback in GLM-style tool call parser

The `parse_glm_style_tool_calls` function had a "Plain URL" fallback
that converted any bare URL line (e.g. `https://example.com`) into a
`shell` tool call running `curl -s '<url>'`. This caused:

- False positives: normal URLs in LLM replies misinterpreted as tool calls
- Swallowed replies: text with URLs not forwarded to the channel
- Unintended shell commands: `curl` executed without user intent

Explicit GLM-format tool calls like `browser_open/url>https://...` and
`shell/command>...` are unaffected — only the bare URL catch-all is
removed.

* style: cargo fmt

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-16 16:36:27 -04:00
Argenis 7c36a403b0 chore: sync Scoop and AUR templates to v0.4.1 (#3736) 2026-03-16 16:36:26 -04:00
Argenis 058dbc8786 feat(channels): add X/Twitter and Mochat channel integrations (#3735)
* feat(channels): add X/Twitter and Mochat channel integrations

Add two new channel implementations to close competitive gaps:

- X/Twitter: Twitter API v2 with mentions polling, tweet threading
  (auto-splits at 280 chars), DM support, and rate limit handling
- Mochat: HTTP polling-based integration with Mochat customer service
  platform, configurable poll interval, message dedup

Both channels follow the existing Channel trait pattern with full
config schema integration, health checks, and dedup.

Closes competitive gap: NanoClaw had X/Twitter, Nanobot had Mochat.

* fix(channels): use write! instead of format_push_string for clippy

Replace url.push_str(&format!(...)) with write!(url, ...) to satisfy
clippy::format_push_string lint on CI.

* fix(channels): rename reply_to parameter to avoid legacy field grep

The component test source_does_not_use_legacy_reply_to_field greps
for "reply_to:" in source files. Rename the parameter to
reply_tweet_id to pass this check.
2026-03-16 16:35:21 -04:00
SimianAstronaut7 aff7a19494 Merge pull request #3641 from zeroclaw-labs/work-issues/3011-fix-dashboard-ws-protocols
fix(gateway): pass bearer token in WebSocket subprotocol for dashboard auth
2026-03-16 16:34:46 -04:00
SimianAstronaut7 0894429b54 Merge pull request #3640 from zeroclaw-labs/work-issues/2881-transcription-initial-prompt
feat(config): support initial_prompt in transcription config
2026-03-16 16:34:43 -04:00
SimianAstronaut7 6b03e885fc Merge pull request #3639 from zeroclaw-labs/work-issues/3474-docker-restart-docs
docs(setup): add Docker/Podman stop/restart instructions
2026-03-16 16:34:41 -04:00
Argenis 84470a2dd2 fix(agent): strip vision markers from history for non-vision providers (#3734)
* fix(agent): strip vision markers from history for non-vision providers

When a user sends an image via Telegram to a non-vision provider, the
`[IMAGE:/path]` marker gets stored in the JSONL session file. Previously,
the rollback only removed it from in-memory history, not from the JSONL
file. On restart, the marker was reloaded and permanently broke the
conversation.

Two fixes:
1. `rollback_orphan_user_turn` now also calls `remove_last` on the
   session store so the poisoned entry is removed from disk.
2. When building history for a non-vision provider, `[IMAGE:]` markers
   are stripped from older history messages (and empty turns are dropped).

Fixes #3674

* fix(agent): only strip vision markers from older history, not current message

The initial fix stripped [IMAGE:] markers from all prior_turns including
the current message, which caused the vision check to never fire. Now
only strip from turns before the last one (the current request), so
fresh image sends still get a proper vision capability error.
2026-03-16 16:25:45 -04:00
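The second fix can be sketched as below. Function names and the marker-scanning details are illustrative; the behavior (strip `[IMAGE:...]` from every turn except the last, drop turns emptied by stripping) follows the commit.

```rust
// Remove every "[IMAGE:...]" marker from a message. An unterminated
// marker drops the remaining tail (acceptable for this sketch).
fn strip_image_markers(text: &str) -> String {
    let mut out = String::new();
    let mut rest = text;
    while let Some(start) = rest.find("[IMAGE:") {
        out.push_str(&rest[..start]);
        match rest[start..].find(']') {
            Some(end) => rest = &rest[start + end + 1..],
            None => rest = "",
        }
    }
    out.push_str(rest);
    out.trim().to_string()
}

// Strip markers from older turns only, so the current (last) message
// still triggers the vision capability check; drop emptied turns.
fn sanitize_history(turns: &mut Vec<String>) {
    let last = turns.len().saturating_sub(1);
    for turn in turns.iter_mut().take(last) {
        *turn = strip_image_markers(turn);
    }
    turns.retain(|t| !t.is_empty());
}

fn main() {
    let mut h = vec![
        "[IMAGE:/a.png]".to_string(),
        "hello".to_string(),
        "now [IMAGE:/b.png]".to_string(),
    ];
    sanitize_history(&mut h);
    // Older image-only turn dropped; current message untouched.
    assert_eq!(h, vec!["hello".to_string(), "now [IMAGE:/b.png]".to_string()]);
    println!("ok");
}
```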
Argenis 8a890be021 chore: bump version to 0.4.2 (#3733)
Update version across Cargo.toml, Cargo.lock, Scoop manifest,
and AUR PKGBUILD/.SRCINFO for the v0.4.2 stable release.
2026-03-16 16:02:34 -04:00
Argenis 74a5ff78e7 fix(qq): send markdown messages instead of plain text (#3732)
* fix(ci): decouple tweet from Docker push in release workflows

Remove Docker from the tweet job's dependency chain in both beta and
stable release workflows. Docker multi-platform builds are slow and
can be cancelled by concurrency groups, which was blocking the tweet
from ever firing. The tweet announces the GitHub Release, not the
Docker image.

* fix(qq): send markdown messages instead of plain text

Change msg_type from 0 (plain text) to 2 (markdown) and wrap content
in a markdown object per QQ's API documentation. This ensures markdown
formatting (bold, italic, code blocks, etc.) renders properly in QQ
clients instead of displaying raw syntax.

Fixes #3647
2026-03-16 15:56:09 -04:00
Argenis 93b16dece5 Merge pull request #3731 from zeroclaw-labs/fix/tweet-decouple-docker
fix(ci): decouple tweet from Docker push in release workflows
2026-03-16 15:52:05 -04:00
Argenis c773170753 feat(providers): close AiHubMix, SiliconFlow, and Codex OAuth provider gaps (#3730)
Add env var resolution for AiHubMix (AIHUBMIX_API_KEY) and SiliconFlow
(SILICONFLOW_API_KEY) so users can authenticate via environment variables.

Add factory and credential resolution tests for AiHubMix, SiliconFlow,
and Codex OAuth to ensure all provider aliases work correctly.
2026-03-16 15:48:27 -04:00
argenis de la rosa f210b43977 fix(ci): decouple tweet from Docker push in release workflows
Remove Docker from the tweet job's dependency chain in both beta and
stable release workflows. Docker multi-platform builds are slow and
can be cancelled by concurrency groups, which was blocking the tweet
from ever firing. The tweet announces the GitHub Release, not the
Docker image.
2026-03-16 15:32:29 -04:00
Argenis 50bc360bf4 Merge pull request #3729 from zeroclaw-labs/fix/crates-auto-publish-idempotent
fix(ci): make crates.io publish idempotent across all workflows
2026-03-16 15:30:30 -04:00
Argenis fc8ed583a0 feat(providers): add VOLCENGINE_API_KEY env var for VolcEngine/ByteDance gateway (#3725) 2026-03-16 15:29:36 -04:00
argenis de la rosa d593b6b1e4 fix(ci): make crates.io publish idempotent across all workflows
Both publish-crates-auto.yml and publish-crates.yml now treat
"already exists" from cargo publish as success instead of failing
the workflow. This prevents false failures when the auto-sync and
stable release workflows race or when re-running a publish.
2026-03-16 15:11:29 -04:00
Argenis 426faa3923 Merge pull request #3724 from zeroclaw-labs/fix/release-sync-tweet-and-crates
fix(ci): ensure tweet posts for stable releases and fix beta concurrency
2026-03-16 15:07:58 -04:00
argenis de la rosa 85429b3657 fix(ci): ensure tweet posts for stable releases and fix beta concurrency
- Tweet workflow: stable releases always tweet (no feature-check gate);
  beta tweets now also trigger on fix() commits, not just feat()
- Beta release: use cancel-in-progress to avoid queued runs getting
  stuck when rapid merges hit the concurrency group
2026-03-16 14:54:32 -04:00
Argenis 8adf05f307 Merge pull request #3722 from zeroclaw-labs/ci/restore-package-manager-syncs
ci: restore Homebrew and add Scoop/AUR package manager workflows
2026-03-16 14:30:09 -04:00
Argenis a5f844d7cc fix(daemon): ignore SIGHUP to survive terminal/SSH disconnect (#3721)
* fix(daemon): ignore SIGHUP to survive terminal/SSH disconnect (#3688)

* style: remove redundant continue flagged by clippy
2026-03-16 14:21:59 -04:00
Argenis 7a9e815948 fix(config): add serde default for cli field in ChannelsConfig (#3720)
* fix(config): add serde default for cli field in ChannelsConfig (#3710)

* style: fix rustfmt formatting in test
2026-03-16 14:21:51 -04:00
Argenis 46378cf8b4 fix(security): validate command before rate-limiting in cron once (#3699) (#3719) 2026-03-16 14:21:45 -04:00
Argenis c2133e6e62 fix(docker): prevent dummy binary from being shipped in container (#3687) (#3718) 2026-03-16 14:21:37 -04:00
Argenis e9b3148e73 Merge pull request #3717 from zeroclaw-labs/chore/version-bump-0.4.1
chore: bump version to 0.4.1
2026-03-16 14:21:20 -04:00
argenis de la rosa 3d007f6b55 docs: add Scoop and AUR workflows to CI map and release process 2026-03-16 14:17:18 -04:00
argenis de la rosa f349de78ed ci(aur): add AUR PKGBUILD template and publishing workflow
Adds Arch Linux distribution via AUR:
- dist/aur/PKGBUILD: package build template with cargo dist profile
- dist/aur/.SRCINFO: AUR metadata
- .github/workflows/pub-aur.yml: manual workflow to push to AUR
2026-03-16 14:16:30 -04:00
argenis de la rosa cd40051f4c ci(scoop): add Scoop manifest template and publishing workflow
Adds Windows package distribution via Scoop:
- dist/scoop/zeroclaw.json: manifest template with checkver/autoupdate
- .github/workflows/pub-scoop.yml: manual workflow to update Scoop bucket
2026-03-16 14:15:29 -04:00
argenis de la rosa 6e4b1ede28 ci(homebrew): restore Homebrew core formula publishing workflow
Re-adds the manual-dispatch workflow for bumping the zeroclaw formula
in Homebrew/homebrew-core via a bot-owned fork. Improved from the
previously removed version with safer env-var handling.

Requires secrets: HOMEBREW_CORE_BOT_TOKEN or HOMEBREW_UPSTREAM_PR_TOKEN
Requires variables: HOMEBREW_CORE_BOT_FORK_REPO, HOMEBREW_CORE_BOT_EMAIL
2026-03-16 14:12:48 -04:00
argenis de la rosa cfba009833 chore: regenerate Cargo.lock for v0.4.1 2026-03-16 13:49:39 -04:00
argenis de la rosa 45abd27e4a chore: bump version to 0.4.1
Reflects the addition of heartbeat metrics, SQLite session backend,
and two-tier response cache in this release cycle.
2026-03-16 13:49:36 -04:00
Argenis bb99d2b57a Merge branch 'master' into fix-2400-block-config-self-mutation 2026-03-16 09:21:57 -04:00
李龙 0668001470 81256dbf42 test(config): fix helper lint and swarms fixture 2026-03-16 17:56:30 +08:00
李龙 0668001470 eb9b26cea0 test(config): centralize backward-compat fixtures 2026-03-16 17:45:20 +08:00
李龙 0668001470 6211824f01 fix(security): block runtime config state edits 2026-03-16 17:21:22 +08:00
李龙 0668001470 b4decb40c6 Merge upstream/master into fix/config-load-initialized-state 2026-03-16 13:18:30 +08:00
李龙 0668001470 2b30f060fe test(config): move initialized log regression away from merge hotspot 2026-03-16 11:06:18 +08:00
李龙 0668001470 f994979380 fix(config): avoid clippy used_underscore_binding 2026-03-16 09:13:18 +08:00
simianastronaut 2539bcafe0 fix(gateway): pass bearer token in WebSocket subprotocol for dashboard auth
The dashboard WebSocket client was only sending ['zeroclaw.v1'] as the
protocols parameter, omitting the bearer token subprotocol. When
require_pairing = true, the server extracts the token from
Sec-WebSocket-Protocol as a fallback (browsers cannot set custom
headers on WebSocket connections). Without the bearer.<token> entry
in the protocols array, subprotocol-based authentication always failed.

Include `bearer.<token>` in the protocols array when a token is
available, matching the server's extract_ws_token() expectation.

Closes #3011

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 16:37:41 -04:00
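The server-side fallback named above (`extract_ws_token`) can be sketched in a few lines; the exact signature is an assumption. The client sends protocols like `["zeroclaw.v1", "bearer.<token>"]`, and the server pulls the token back out of the negotiated `Sec-WebSocket-Protocol` entries.

```rust
// Find the first "bearer.<token>" entry among the offered WebSocket
// subprotocols and return the token portion.
fn extract_ws_token(protocols: &[&str]) -> Option<String> {
    protocols
        .iter()
        .find_map(|p| p.strip_prefix("bearer.").map(str::to_string))
}

fn main() {
    assert_eq!(
        extract_ws_token(&["zeroclaw.v1", "bearer.abc123"]),
        Some("abc123".to_string())
    );
    // Without the bearer entry, subprotocol auth has nothing to check.
    assert_eq!(extract_ws_token(&["zeroclaw.v1"]), None);
    println!("ok");
}
```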
simianastronaut 37d76f7c42 feat(config): support initial_prompt in transcription config for proper noun recognition
Add `initial_prompt: Option<String>` to `TranscriptionConfig` and pass
it as the `prompt` field in the Whisper API multipart POST when present.
This lets users bias transcription toward expected vocabulary (proper
nouns, technical terms) via the config file.

Closes #2881

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 16:31:55 -04:00
simianastronaut 41b46f23e3 docs(setup): add Docker/Podman stop/restart instructions
Users who installed via `./install.sh --docker` had no documented way to
restart the container after stopping it. Add clear lifecycle instructions
(stop, start, restart, logs, health check) to both the bootstrap guide and
the operations runbook, covering docker-compose, manual `docker run`, and
Podman-specific flags.

Closes #3474

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 16:26:28 -04:00
Alix-007 04ea5093d4 fix(config): log existing config as initialized 2026-03-13 02:41:39 +08:00
206 changed files with 31071 additions and 1490 deletions
@@ -0,0 +1,10 @@
# cargo-audit configuration
# https://rustsec.org/
[advisories]
ignore = [
# wasmtime vulns via extism 1.13.0 — no upstream fix; plugins feature-gated
"RUSTSEC-2026-0006", # wasmtime f64.copysign segfault on x86-64
"RUSTSEC-2026-0020", # WASI guest-controlled resource exhaustion
"RUSTSEC-2026-0021", # WASI http fields panic
]
@@ -64,3 +64,12 @@ LICENSE
*.profdata
coverage
lcov.info
# Firmware and hardware crates (not needed for Docker runtime)
firmware/
crates/robot-kit/
# Application and script directories (not needed for Docker runtime)
apps/
python/
scripts/
@@ -1 +1,61 @@
# Git attributes for ZeroClaw
# https://git-scm.com/docs/gitattributes
# Auto detect text files and perform LF normalization
* text=auto
# Source code
*.rs text eol=lf linguist-language=Rust
*.toml text eol=lf linguist-language=TOML
*.py text eol=lf linguist-language=Python
*.js text eol=lf linguist-language=JavaScript
*.ts text eol=lf linguist-language=TypeScript
*.html text eol=lf linguist-language=HTML
*.css text eol=lf linguist-language=CSS
*.scss text eol=lf linguist-language=SCSS
*.json text eol=lf linguist-language=JSON
*.yaml text eol=lf linguist-language=YAML
*.yml text eol=lf linguist-language=YAML
*.md text eol=lf linguist-language=Markdown
*.sh text eol=lf linguist-language=Shell
*.bash text eol=lf linguist-language=Shell
*.ps1 text eol=crlf linguist-language=PowerShell
# Documentation
*.txt text eol=lf
LICENSE* text eol=lf
# Configuration files
.editorconfig text eol=lf
.gitattributes text eol=lf
.gitignore text eol=lf
.dockerignore text eol=lf
# Rust-specific
Cargo.lock text eol=lf linguist-generated
Cargo.toml text eol=lf
# Declare files that will always have CRLF line endings on checkout
*.sln text eol=crlf
# Denote all files that are truly binary and should not be modified
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary
*.svg text
*.wasm binary
*.woff binary
*.woff2 binary
*.ttf binary
*.eot binary
*.mp3 binary
*.mp4 binary
*.webm binary
*.zip binary
*.tar binary
*.gz binary
*.bz2 binary
*.7z binary
*.db binary
@@ -133,6 +133,29 @@ jobs:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
check-all-features:
name: Check (all features)
runs-on: ubuntu-latest
timeout-minutes: 20
needs: [lint]
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Install system dependencies
run: sudo apt-get update -qq && sudo apt-get install -y libudev-dev
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Check all features
run: cargo check --all-features --locked
docs-quality:
name: Docs Quality
runs-on: ubuntu-latest
@@ -157,7 +180,7 @@ jobs:
gate:
name: CI Required Gate
if: always()
- needs: [lint, lint-strict-delta, test, build, docs-quality]
+ needs: [lint, lint-strict-delta, test, build, docs-quality, check-all-features]
runs-on: ubuntu-latest
steps:
- name: Check upstream job results
@@ -74,4 +74,4 @@ jobs:
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
- cargo build --release --locked --target ${{ matrix.target }}
+ cargo build --release --locked --features channel-matrix,channel-lark,memory-postgres --target ${{ matrix.target }}
@@ -0,0 +1,181 @@
name: Pub AUR Package
on:
workflow_call:
inputs:
release_tag:
description: "Existing release tag (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Generate PKGBUILD only (no push)"
required: false
default: false
type: boolean
secrets:
AUR_SSH_KEY:
required: false
workflow_dispatch:
inputs:
release_tag:
description: "Existing release tag (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Generate PKGBUILD only (no push)"
required: false
default: true
type: boolean
concurrency:
group: aur-publish-${{ github.run_id }}
cancel-in-progress: false
permissions:
contents: read
jobs:
publish-aur:
name: Update AUR Package
runs-on: ubuntu-latest
env:
RELEASE_TAG: ${{ inputs.release_tag }}
DRY_RUN: ${{ inputs.dry_run }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Validate and compute metadata
id: meta
shell: bash
run: |
set -euo pipefail
if [[ ! "$RELEASE_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "::error::release_tag must be vX.Y.Z format."
exit 1
fi
version="${RELEASE_TAG#v}"
tarball_url="https://github.com/${GITHUB_REPOSITORY}/archive/refs/tags/${RELEASE_TAG}.tar.gz"
tarball_sha="$(curl -fsSL "$tarball_url" | sha256sum | awk '{print $1}')"
if [[ -z "$tarball_sha" ]]; then
echo "::error::Could not compute SHA256 for source tarball."
exit 1
fi
{
echo "version=$version"
echo "tarball_url=$tarball_url"
echo "tarball_sha=$tarball_sha"
} >> "$GITHUB_OUTPUT"
{
echo "### AUR Package Metadata"
echo "- version: \`${version}\`"
echo "- tarball_url: \`${tarball_url}\`"
echo "- tarball_sha: \`${tarball_sha}\`"
} >> "$GITHUB_STEP_SUMMARY"
- name: Generate PKGBUILD
id: pkgbuild
shell: bash
env:
VERSION: ${{ steps.meta.outputs.version }}
TARBALL_SHA: ${{ steps.meta.outputs.tarball_sha }}
run: |
set -euo pipefail
pkgbuild_file="$(mktemp)"
sed -e "s/^pkgver=.*/pkgver=${VERSION}/" \
-e "s/^sha256sums=.*/sha256sums=('${TARBALL_SHA}')/" \
dist/aur/PKGBUILD > "$pkgbuild_file"
echo "pkgbuild_file=$pkgbuild_file" >> "$GITHUB_OUTPUT"
echo "### Generated PKGBUILD" >> "$GITHUB_STEP_SUMMARY"
echo '```bash' >> "$GITHUB_STEP_SUMMARY"
cat "$pkgbuild_file" >> "$GITHUB_STEP_SUMMARY"
echo '```' >> "$GITHUB_STEP_SUMMARY"
- name: Generate .SRCINFO
id: srcinfo
shell: bash
env:
VERSION: ${{ steps.meta.outputs.version }}
TARBALL_SHA: ${{ steps.meta.outputs.tarball_sha }}
run: |
set -euo pipefail
srcinfo_file="$(mktemp)"
sed -e "s/pkgver = .*/pkgver = ${VERSION}/" \
-e "s/sha256sums = .*/sha256sums = ${TARBALL_SHA}/" \
-e "s|zeroclaw-[0-9.]*.tar.gz|zeroclaw-${VERSION}.tar.gz|g" \
-e "s|/v[0-9.]*\.tar\.gz|/v${VERSION}.tar.gz|g" \
dist/aur/.SRCINFO > "$srcinfo_file"
echo "srcinfo_file=$srcinfo_file" >> "$GITHUB_OUTPUT"
- name: Push to AUR
if: inputs.dry_run == false
shell: bash
env:
AUR_SSH_KEY: ${{ secrets.AUR_SSH_KEY }}
PKGBUILD_FILE: ${{ steps.pkgbuild.outputs.pkgbuild_file }}
SRCINFO_FILE: ${{ steps.srcinfo.outputs.srcinfo_file }}
VERSION: ${{ steps.meta.outputs.version }}
run: |
set -euo pipefail
if [[ -z "${AUR_SSH_KEY}" ]]; then
echo "::error::Secret AUR_SSH_KEY is required for non-dry-run."
exit 1
fi
# Set up SSH key — normalize line endings and ensure trailing newline
mkdir -p ~/.ssh
chmod 700 ~/.ssh
printf '%s\n' "$AUR_SSH_KEY" | tr -d '\r' > ~/.ssh/aur
chmod 600 ~/.ssh/aur
cat > ~/.ssh/config <<'SSH_CONFIG'
Host aur.archlinux.org
IdentityFile ~/.ssh/aur
User aur
StrictHostKeyChecking accept-new
SSH_CONFIG
chmod 600 ~/.ssh/config
# Verify key is valid and print fingerprint for debugging
echo "::group::SSH key diagnostics"
ssh-keygen -l -f ~/.ssh/aur || { echo "::error::AUR_SSH_KEY is not a valid SSH private key"; exit 1; }
echo "::endgroup::"
# Test SSH connectivity before attempting clone
ssh -T -o BatchMode=yes -o ConnectTimeout=10 aur@aur.archlinux.org 2>&1 || true
tmp_dir="$(mktemp -d)"
git clone ssh://aur@aur.archlinux.org/zeroclaw.git "$tmp_dir/aur"
cp "$PKGBUILD_FILE" "$tmp_dir/aur/PKGBUILD"
cp "$SRCINFO_FILE" "$tmp_dir/aur/.SRCINFO"
cd "$tmp_dir/aur"
git config user.name "zeroclaw-bot"
git config user.email "bot@zeroclaw.dev"
git add PKGBUILD .SRCINFO
git commit -m "zeroclaw ${VERSION}"
git push origin HEAD
echo "AUR package updated to ${VERSION}"
- name: Summary
shell: bash
run: |
if [[ "$DRY_RUN" == "true" ]]; then
echo "Dry run complete: PKGBUILD generated, no push performed."
else
echo "Publish complete: AUR package pushed."
fi
@@ -0,0 +1,212 @@
name: Pub Homebrew Core
on:
workflow_dispatch:
inputs:
release_tag:
description: "Existing release tag to publish (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Patch formula only (no push/PR)"
required: false
default: true
type: boolean
concurrency:
group: homebrew-core-${{ github.run_id }}
cancel-in-progress: false
permissions:
contents: read
jobs:
publish-homebrew-core:
name: Publish Homebrew Core PR
runs-on: ubuntu-latest
env:
UPSTREAM_REPO: Homebrew/homebrew-core
FORMULA_PATH: Formula/z/zeroclaw.rb
RELEASE_TAG: ${{ inputs.release_tag }}
DRY_RUN: ${{ inputs.dry_run }}
BOT_FORK_REPO: ${{ vars.HOMEBREW_CORE_BOT_FORK_REPO }}
BOT_EMAIL: ${{ vars.HOMEBREW_CORE_BOT_EMAIL }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Validate release tag and version alignment
id: release_meta
shell: bash
run: |
set -euo pipefail
semver_pattern='^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?$'
if [[ ! "$RELEASE_TAG" =~ $semver_pattern ]]; then
echo "::error::release_tag must match semver-like format (vX.Y.Z[-suffix])."
exit 1
fi
if ! git rev-parse "refs/tags/${RELEASE_TAG}" >/dev/null 2>&1; then
git fetch --tags origin
fi
tag_version="${RELEASE_TAG#v}"
cargo_version="$(git show "${RELEASE_TAG}:Cargo.toml" \
| sed -n 's/^version = "\([^"]*\)"/\1/p' | head -n1)"
if [[ -z "$cargo_version" ]]; then
echo "::error::Unable to read Cargo.toml version from tag ${RELEASE_TAG}."
exit 1
fi
if [[ "$cargo_version" != "$tag_version" ]]; then
echo "::error::Tag ${RELEASE_TAG} does not match Cargo.toml version (${cargo_version})."
exit 1
fi
tarball_url="https://github.com/${GITHUB_REPOSITORY}/archive/refs/tags/${RELEASE_TAG}.tar.gz"
tarball_sha="$(curl -fsSL "$tarball_url" | sha256sum | awk '{print $1}')"
{
echo "tag_version=$tag_version"
echo "tarball_url=$tarball_url"
echo "tarball_sha=$tarball_sha"
} >> "$GITHUB_OUTPUT"
{
echo "### Release Metadata"
echo "- release_tag: \`${RELEASE_TAG}\`"
echo "- cargo_version: \`${cargo_version}\`"
echo "- tarball_sha256: \`${tarball_sha}\`"
echo "- dry_run: ${DRY_RUN}"
} >> "$GITHUB_STEP_SUMMARY"
- name: Patch Homebrew formula
id: patch_formula
shell: bash
env:
HOMEBREW_CORE_BOT_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
GH_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
run: |
set -euo pipefail
tmp_repo="$(mktemp -d)"
echo "tmp_repo=$tmp_repo" >> "$GITHUB_OUTPUT"
if [[ "$DRY_RUN" == "true" ]]; then
git clone --depth=1 "https://github.com/${UPSTREAM_REPO}.git" "$tmp_repo/homebrew-core"
else
if [[ -z "${BOT_FORK_REPO}" ]]; then
echo "::error::Repository variable HOMEBREW_CORE_BOT_FORK_REPO is required when dry_run=false."
exit 1
fi
if [[ -z "${HOMEBREW_CORE_BOT_TOKEN}" ]]; then
echo "::error::Repository secret HOMEBREW_CORE_BOT_TOKEN is required when dry_run=false."
exit 1
fi
if [[ "$BOT_FORK_REPO" != */* ]]; then
echo "::error::HOMEBREW_CORE_BOT_FORK_REPO must be in owner/repo format."
exit 1
fi
if ! gh api "repos/${BOT_FORK_REPO}" >/dev/null 2>&1; then
echo "::error::HOMEBREW_CORE_BOT_TOKEN cannot access ${BOT_FORK_REPO}."
exit 1
fi
gh repo clone "${BOT_FORK_REPO}" "$tmp_repo/homebrew-core" -- --depth=1
fi
repo_dir="$tmp_repo/homebrew-core"
formula_file="$repo_dir/$FORMULA_PATH"
if [[ ! -f "$formula_file" ]]; then
echo "::error::Formula file not found: $FORMULA_PATH"
exit 1
fi
if [[ "$DRY_RUN" == "false" ]]; then
if git -C "$repo_dir" remote get-url upstream >/dev/null 2>&1; then
git -C "$repo_dir" remote set-url upstream "https://github.com/${UPSTREAM_REPO}.git"
else
git -C "$repo_dir" remote add upstream "https://github.com/${UPSTREAM_REPO}.git"
fi
if git -C "$repo_dir" ls-remote --exit-code --heads upstream main >/dev/null 2>&1; then
upstream_ref="main"
else
upstream_ref="master"
fi
git -C "$repo_dir" fetch --depth=1 upstream "$upstream_ref"
branch_name="zeroclaw-${RELEASE_TAG}-${GITHUB_RUN_ID}"
git -C "$repo_dir" checkout -B "$branch_name" "upstream/$upstream_ref"
echo "branch_name=$branch_name" >> "$GITHUB_OUTPUT"
fi
tarball_url="$(grep 'tarball_url=' "$GITHUB_OUTPUT" | head -1 | cut -d= -f2-)"
tarball_sha="$(grep 'tarball_sha=' "$GITHUB_OUTPUT" | head -1 | cut -d= -f2-)"
perl -0pi -e "s|^ url \".*\"| url \"${tarball_url}\"|m" "$formula_file"
perl -0pi -e "s|^ sha256 \".*\"| sha256 \"${tarball_sha}\"|m" "$formula_file"
perl -0pi -e "s|^ license \".*\"| license \"Apache-2.0 OR MIT\"|m" "$formula_file"
# Ensure Node.js build dependency is declared so that build.rs can
# run `npm ci && npm run build` to produce the web frontend assets.
if ! grep -q 'depends_on "node" => :build' "$formula_file"; then
perl -0pi -e 's|( depends_on "rust" => :build\n)|\1 depends_on "node" => :build\n|m' "$formula_file"
fi
git -C "$repo_dir" diff -- "$FORMULA_PATH" > "$tmp_repo/formula.diff"
if [[ ! -s "$tmp_repo/formula.diff" ]]; then
echo "::error::No formula changes generated. Nothing to publish."
exit 1
fi
{
echo "### Formula Diff"
echo '```diff'
cat "$tmp_repo/formula.diff"
echo '```'
} >> "$GITHUB_STEP_SUMMARY"
- name: Push branch and open Homebrew PR
if: inputs.dry_run == false
shell: bash
env:
GH_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
TMP_REPO: ${{ steps.patch_formula.outputs.tmp_repo }}
BRANCH_NAME: ${{ steps.patch_formula.outputs.branch_name }}
TAG_VERSION: ${{ steps.release_meta.outputs.tag_version }}
TARBALL_URL: ${{ steps.release_meta.outputs.tarball_url }}
TARBALL_SHA: ${{ steps.release_meta.outputs.tarball_sha }}
run: |
set -euo pipefail
repo_dir="${TMP_REPO}/homebrew-core"
fork_owner="${BOT_FORK_REPO%%/*}"
bot_email="${BOT_EMAIL:-${fork_owner}@users.noreply.github.com}"
git -C "$repo_dir" config user.name "$fork_owner"
git -C "$repo_dir" config user.email "$bot_email"
git -C "$repo_dir" add "$FORMULA_PATH"
git -C "$repo_dir" commit -m "zeroclaw ${TAG_VERSION}"
gh auth setup-git
git -C "$repo_dir" push --set-upstream origin "$BRANCH_NAME"
pr_body="Automated formula bump from ZeroClaw release workflow.
- Release tag: ${RELEASE_TAG}
- Source tarball: ${TARBALL_URL}
- Source sha256: ${TARBALL_SHA}"
gh pr create \
--repo "$UPSTREAM_REPO" \
--base main \
--head "${fork_owner}:${BRANCH_NAME}" \
--title "zeroclaw ${TAG_VERSION}" \
--body "$pr_body"
- name: Summary
shell: bash
run: |
if [[ "$DRY_RUN" == "true" ]]; then
echo "Dry run complete: formula diff generated, no push/PR performed."
else
echo "Publish complete: branch pushed and PR opened from bot fork."
fi
+165

@@ -0,0 +1,165 @@
name: Pub Scoop Manifest
on:
workflow_call:
inputs:
release_tag:
description: "Existing release tag (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Generate manifest only (no push)"
required: false
default: false
type: boolean
secrets:
SCOOP_BUCKET_TOKEN:
required: false
workflow_dispatch:
inputs:
release_tag:
description: "Existing release tag (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Generate manifest only (no push)"
required: false
default: true
type: boolean
concurrency:
group: scoop-publish-${{ github.run_id }}
cancel-in-progress: false
permissions:
contents: read
jobs:
publish-scoop:
name: Update Scoop Manifest
runs-on: ubuntu-latest
env:
RELEASE_TAG: ${{ inputs.release_tag }}
DRY_RUN: ${{ inputs.dry_run }}
SCOOP_BUCKET_REPO: ${{ vars.SCOOP_BUCKET_REPO }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Validate and compute metadata
id: meta
shell: bash
run: |
set -euo pipefail
if [[ ! "$RELEASE_TAG" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "::error::release_tag must be vX.Y.Z format."
exit 1
fi
version="${RELEASE_TAG#v}"
zip_url="https://github.com/${GITHUB_REPOSITORY}/releases/download/${RELEASE_TAG}/zeroclaw-x86_64-pc-windows-msvc.zip"
sums_url="https://github.com/${GITHUB_REPOSITORY}/releases/download/${RELEASE_TAG}/SHA256SUMS"
sha256="$(curl -fsSL "$sums_url" | grep 'zeroclaw-x86_64-pc-windows-msvc.zip' | awk '{print $1}')"
if [[ -z "$sha256" ]]; then
echo "::error::Could not find Windows binary hash in SHA256SUMS for ${RELEASE_TAG}."
exit 1
fi
{
echo "version=$version"
echo "zip_url=$zip_url"
echo "sha256=$sha256"
} >> "$GITHUB_OUTPUT"
{
echo "### Scoop Manifest Metadata"
echo "- version: \`${version}\`"
echo "- zip_url: \`${zip_url}\`"
echo "- sha256: \`${sha256}\`"
} >> "$GITHUB_STEP_SUMMARY"
- name: Generate manifest
id: manifest
shell: bash
env:
VERSION: ${{ steps.meta.outputs.version }}
ZIP_URL: ${{ steps.meta.outputs.zip_url }}
SHA256: ${{ steps.meta.outputs.sha256 }}
run: |
set -euo pipefail
manifest_file="$(mktemp)"
cat > "$manifest_file" <<MANIFEST
{
"version": "${VERSION}",
"description": "Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant.",
"homepage": "https://github.com/zeroclaw-labs/zeroclaw",
"license": "MIT|Apache-2.0",
"architecture": {
"64bit": {
"url": "${ZIP_URL}",
"hash": "${SHA256}",
"bin": "zeroclaw.exe"
}
},
"checkver": {
"github": "https://github.com/zeroclaw-labs/zeroclaw"
},
"autoupdate": {
"architecture": {
"64bit": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v\$version/zeroclaw-x86_64-pc-windows-msvc.zip"
}
},
"hash": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v\$version/SHA256SUMS",
"regex": "([a-f0-9]{64})\\\\s+zeroclaw-x86_64-pc-windows-msvc\\\\.zip"
}
}
}
MANIFEST
jq '.' "$manifest_file" > "${manifest_file}.formatted"
mv "${manifest_file}.formatted" "$manifest_file"
echo "manifest_file=$manifest_file" >> "$GITHUB_OUTPUT"
echo "### Generated Manifest" >> "$GITHUB_STEP_SUMMARY"
echo '```json' >> "$GITHUB_STEP_SUMMARY"
cat "$manifest_file" >> "$GITHUB_STEP_SUMMARY"
echo '```' >> "$GITHUB_STEP_SUMMARY"
- name: Push to Scoop bucket
if: inputs.dry_run == false
shell: bash
env:
GH_TOKEN: ${{ secrets.SCOOP_BUCKET_TOKEN }}
MANIFEST_FILE: ${{ steps.manifest.outputs.manifest_file }}
VERSION: ${{ steps.meta.outputs.version }}
run: |
set -euo pipefail
if [[ -z "${SCOOP_BUCKET_REPO}" ]]; then
echo "::error::Repository variable SCOOP_BUCKET_REPO is required (e.g. zeroclaw-labs/scoop-zeroclaw)."
exit 1
fi
tmp_dir="$(mktemp -d)"
gh repo clone "${SCOOP_BUCKET_REPO}" "$tmp_dir/bucket" -- --depth=1
mkdir -p "$tmp_dir/bucket/bucket"
cp "$MANIFEST_FILE" "$tmp_dir/bucket/bucket/zeroclaw.json"
cd "$tmp_dir/bucket"
git config user.name "zeroclaw-bot"
git config user.email "bot@zeroclaw.dev"
git add bucket/zeroclaw.json
git commit -m "zeroclaw ${VERSION}"
gh auth setup-git
git push origin HEAD
echo "Scoop manifest updated to ${VERSION}"
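The `grep | awk` lookup in the metadata step mirrors the `autoupdate.hash.regex` in the manifest: both pull the 64-hex digest off the line naming the Windows zip. A sketch with generated placeholder hashes (the sums content is illustrative, not a real SHA256SUMS file):

```shell
#!/usr/bin/env bash
# Sketch of the SHA256SUMS lookup used above, on made-up content.
ha="$(printf 'a%.0s' $(seq 1 64))"   # placeholder hash for the tarball
hb="$(printf 'b%.0s' $(seq 1 64))"   # placeholder hash for the zip
sums="$ha  zeroclaw-x86_64-unknown-linux-gnu.tar.gz
$hb  zeroclaw-x86_64-pc-windows-msvc.zip"
# Select the Windows zip line, keep only the hash column.
sha256="$(printf '%s\n' "$sums" | grep 'zeroclaw-x86_64-pc-windows-msvc.zip' | awk '{print $1}')"
echo "$sha256"
```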
+12 -1
@@ -103,9 +103,20 @@ jobs:
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish to crates.io
run: cargo publish --locked --allow-dirty --no-verify
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
VERSION: ${{ needs.detect-version-change.outputs.version }}
run: |
# Publish to crates.io; treat "already exists" as success
# (manual publish or stable workflow may have already published)
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::zeroclawlabs@${VERSION} already on crates.io — skipping"
exit 0
fi
exit 1
- name: Verify published
shell: bash
+11 -1
@@ -75,6 +75,16 @@ jobs:
- name: Publish to crates.io
if: "!inputs.dry_run"
run: cargo publish --locked --allow-dirty --no-verify
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
VERSION: ${{ inputs.version }}
run: |
# Publish to crates.io; treat "already exists" as success
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::zeroclawlabs@${VERSION} already on crates.io — skipping"
exit 0
fi
exit 1
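The idempotent-publish pattern in both snippets (`OUTPUT=$(…) && exit 0`, then inspect the captured stderr) can be restated as an if/elif for readability. A sketch with a stub standing in for `cargo publish`:

```shell
#!/usr/bin/env bash
# Sketch of the "already exists is success" publish pattern,
# with fake_publish standing in for `cargo publish`.
fake_publish() { echo "error: crate zeroclawlabs@0.5.1 already exists"; return 1; }
if OUTPUT=$(fake_publish 2>&1); then
  result="published"                 # publish succeeded
elif echo "$OUTPUT" | grep -q 'already exists'; then
  result="skipped"                   # prior workflow already published
else
  result="failed"                    # genuine error: surface it
fi
echo "$result"
```

Capturing with `2>&1` is what makes the `grep` work: cargo reports the duplicate-version error on stderr, not stdout.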
+52 -16
@@ -5,8 +5,8 @@ on:
branches: [master]
concurrency:
group: release
cancel-in-progress: false
group: release-beta
cancel-in-progress: true
permissions:
contents: write
@@ -16,6 +16,7 @@ env:
CARGO_TERM_COLOR: always
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
RELEASE_CARGO_FEATURES: channel-matrix,channel-lark,memory-postgres
jobs:
version:
@@ -213,7 +214,7 @@ jobs:
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --target ${{ matrix.target }}
cargo build --release --locked --features "${{ env.RELEASE_CARGO_FEATURES }}" --target ${{ matrix.target }}
- name: Package (Unix)
if: runner.os != 'Windows'
@@ -294,10 +295,44 @@ jobs:
name: Push Docker Image
needs: [version, build]
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-x86_64-unknown-linux-gnu
path: artifacts/
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-aarch64-unknown-linux-gnu
path: artifacts/
- name: Prepare Docker context with pre-built binaries
run: |
mkdir -p docker-ctx/bin/amd64 docker-ctx/bin/arm64
tar xzf artifacts/zeroclaw-x86_64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/amd64
tar xzf artifacts/zeroclaw-aarch64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/arm64
mkdir -p docker-ctx/zeroclaw-data/.zeroclaw docker-ctx/zeroclaw-data/workspace
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> docker-ctx/zeroclaw-data/.zeroclaw/config.toml
cp Dockerfile.ci docker-ctx/Dockerfile
cp Dockerfile.debian.ci docker-ctx/Dockerfile.debian
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
@@ -309,21 +344,22 @@ jobs:
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
context: docker-ctx
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
# ── Post-publish: only run after ALL artifacts are live ──────────────
tweet:
name: Tweet Release
needs: [version, publish, docker, redeploy-website]
uses: ./.github/workflows/tweet-release.yml
with:
release_tag: ${{ needs.version.outputs.tag }}
release_url: https://github.com/zeroclaw-labs/zeroclaw/releases/tag/${{ needs.version.outputs.tag }}
secrets: inherit
- name: Build and push Debian compatibility image
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: docker-ctx
file: docker-ctx/Dockerfile.debian
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}-debian
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta-debian
platforms: linux/amd64,linux/arm64
# Tweet removed — only stable releases should tweet (see tweet-release.yml).
+74 -7
@@ -20,6 +20,7 @@ env:
CARGO_TERM_COLOR: always
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
RELEASE_CARGO_FEATURES: channel-matrix,channel-lark,memory-postgres
jobs:
validate:
@@ -214,7 +215,7 @@ jobs:
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --target ${{ matrix.target }}
cargo build --release --locked --features "${{ env.RELEASE_CARGO_FEATURES }}" --target ${{ matrix.target }}
- name: Package (Unix)
if: runner.os != 'Windows'
@@ -337,10 +338,44 @@ jobs:
name: Push Docker Image
needs: [validate, build]
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-x86_64-unknown-linux-gnu
path: artifacts/
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-aarch64-unknown-linux-gnu
path: artifacts/
- name: Prepare Docker context with pre-built binaries
run: |
mkdir -p docker-ctx/bin/amd64 docker-ctx/bin/arm64
tar xzf artifacts/zeroclaw-x86_64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/amd64
tar xzf artifacts/zeroclaw-aarch64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/arm64
mkdir -p docker-ctx/zeroclaw-data/.zeroclaw docker-ctx/zeroclaw-data/workspace
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> docker-ctx/zeroclaw-data/.zeroclaw/config.toml
cp Dockerfile.ci docker-ctx/Dockerfile
cp Dockerfile.debian.ci docker-ctx/Dockerfile.debian
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
@@ -352,19 +387,51 @@ jobs:
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
context: docker-ctx
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
# ── Post-publish: only run after ALL artifacts are live ──────────────
- name: Build and push Debian compatibility image
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: docker-ctx
file: docker-ctx/Dockerfile.debian
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}-debian
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:debian
platforms: linux/amd64,linux/arm64
# ── Post-publish: package manager auto-sync ─────────────────────────
scoop:
name: Update Scoop Manifest
needs: [validate, publish]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/pub-scoop.yml
with:
release_tag: ${{ needs.validate.outputs.tag }}
dry_run: false
secrets: inherit
aur:
name: Update AUR Package
needs: [validate, publish]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/pub-aur.yml
with:
release_tag: ${{ needs.validate.outputs.tag }}
dry_run: false
secrets: inherit
# ── Post-publish: tweet after release + website are live ──────────────
# Docker push can be slow; don't let it block the tweet.
tweet:
name: Tweet Release
needs: [validate, publish, docker, redeploy-website]
needs: [validate, publish, redeploy-website]
if: ${{ !cancelled() && needs.publish.result == 'success' }}
uses: ./.github/workflows/tweet-release.yml
with:
release_tag: ${{ needs.validate.outputs.tag }}
+35 -42
@@ -5,7 +5,7 @@ on:
workflow_call:
inputs:
release_tag:
description: "Release tag (e.g. v0.3.0 or v0.3.0-beta.42)"
description: "Stable release tag (e.g. v0.3.0)"
required: true
type: string
release_url:
@@ -53,9 +53,18 @@ jobs:
exit 0
fi
# Find the PREVIOUS release tag (including betas) to check for new features
# Stable releases (no -beta suffix) always tweet — they represent
# the full release cycle, so skipping them loses visibility.
if [[ ! "$RELEASE_TAG" =~ -beta\. ]]; then
echo "Stable release ${RELEASE_TAG} — always tweet"
echo "skip=false" >> "$GITHUB_OUTPUT"
exit 0
fi
# Find the previous STABLE release tag (exclude betas) to check for new features
PREV_TAG=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| grep -vE '\-beta\.' \
| head -1 || echo "")
if [ -z "$PREV_TAG" ]; then
@@ -63,15 +72,15 @@ jobs:
exit 0
fi
# Count new feat() commits since the previous release
NEW_FEATS=$(git log "${PREV_TAG}..${RELEASE_TAG}" --pretty=format:"%s" --no-merges \
| grep -ciE '^feat(\(|:)' || echo "0")
# Count new feat() OR fix() commits since the previous release
NEW_CHANGES=$(git log "${PREV_TAG}..${RELEASE_TAG}" --pretty=format:"%s" --no-merges \
| grep -ciE '^(feat|fix)(\(|:)' || echo "0")
if [ "$NEW_FEATS" -eq 0 ]; then
echo "No new features since ${PREV_TAG} — skipping tweet"
if [ "$NEW_CHANGES" -eq 0 ]; then
echo "No new features or fixes since ${PREV_TAG} — skipping tweet"
echo "skip=true" >> "$GITHUB_OUTPUT"
else
echo "${NEW_FEATS} new feature(s) since ${PREV_TAG} — tweeting"
echo "${NEW_CHANGES} new change(s) since ${PREV_TAG} — tweeting"
echo "skip=false" >> "$GITHUB_OUTPUT"
fi
@@ -89,53 +98,37 @@ jobs:
if [ -n "$MANUAL_TEXT" ]; then
TWEET="$MANUAL_TEXT"
else
# For features: diff against the PREVIOUS release (including betas)
# This prevents duplicate feature lists across consecutive betas
PREV_RELEASE=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| head -1 || echo "")
# For contributors: diff against the last STABLE release
# This captures everyone across the full release cycle
# Diff against the last STABLE release (exclude betas) to capture
# ALL features accumulated across the full beta cycle
PREV_STABLE=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| grep -vE '\-beta\.' \
| head -1 || echo "")
FEAT_RANGE="${PREV_RELEASE:+${PREV_RELEASE}..}${RELEASE_TAG}"
CONTRIB_RANGE="${PREV_STABLE:+${PREV_STABLE}..}${RELEASE_TAG}"
RANGE="${PREV_STABLE:+${PREV_STABLE}..}${RELEASE_TAG}"
# Extract NEW features only since the last release
FEATURES=$(git log "$FEAT_RANGE" --pretty=format:"%s" --no-merges \
# Extract ALL features since the last stable release
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf \
| head -4 \
| while IFS= read -r line; do echo "🚀 ${line}"; done || true)
if [ -z "$FEATURES" ]; then
FEATURES="🚀 Incremental improvements and polish"
fi
# Count ALL contributors across the full release cycle
GIT_AUTHORS=$(git log "$CONTRIB_RANGE" --pretty=format:"%an" --no-merges | sort -uf || true)
CO_AUTHORS=$(git log "$CONTRIB_RANGE" --pretty=format:"%b" --no-merges \
| grep -ioE 'Co-Authored-By: *[^<]+' \
| sed 's/Co-Authored-By: *//i' \
| sed 's/ *$//' \
| sort -uf || true)
TOTAL_COUNT=$(printf "%s\n%s" "$GIT_AUTHORS" "$CO_AUTHORS" \
| sort -uf \
| grep -v '^$' \
| grep -viE '\[bot\]$|^dependabot|^github-actions|^copilot|^ZeroClaw Bot|^ZeroClaw Runner|^ZeroClaw Agent|^blacksmith' \
| grep -c . || echo "0")
FEAT_COUNT=$(echo "$FEATURES" | grep -c . || echo "0")
# Build tweet — new features, contributor count, hashtags
TWEET=$(printf "🦀 ZeroClaw %s\n\n%s\n\n🙌 %s contributors\n\n%s\n\n#zeroclaw #rust #ai #opensource" \
"$RELEASE_TAG" "$FEATURES" "$TOTAL_COUNT" "$RELEASE_URL")
# Format top features with rocket emoji (limit to 6 for tweet space)
FEAT_LIST=$(echo "$FEATURES" \
| head -6 \
| while IFS= read -r line; do echo "🚀 ${line}"; done || true)
if [ -z "$FEAT_LIST" ]; then
FEAT_LIST="🚀 Incremental improvements and polish"
fi
# Build tweet — feature-focused style
TWEET=$(printf "🦀 ZeroClaw %s\n\n%s\n\nZero overhead. Zero compromise. 100%% Rust.\n\n#zeroclaw #rust #ai #opensource" \
"$RELEASE_TAG" "$FEAT_LIST")
fi
# X/Twitter counts any URL as 23 chars (t.co shortening).
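The widened feat-or-fix gate can be checked against canned commit subjects. A sketch, feeding sample subjects instead of real `git log` output:

```shell
#!/usr/bin/env bash
# Sketch of the feat/fix counting gate above, on sample subjects.
subjects='feat(gateway): add websocket ping
fix: handle empty config
chore: bump deps
feat: add scoop workflow'
# Same grep as the workflow: count conventional feat()/fix() subjects.
NEW_CHANGES=$(printf '%s\n' "$subjects" | grep -ciE '^(feat|fix)(\(|:)' || echo "0")
echo "$NEW_CHANGES"  # → 3 (chore line is excluded)
```

The `|| echo "0"` fallback keeps `set -euo pipefail` from aborting the step when no subject matches, since `grep -c` exits non-zero on zero matches.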
+2 -1
@@ -1,7 +1,8 @@
/target
/target-*/
firmware/*/target
web/dist/
web/dist/*
!web/dist/.gitkeep
*.db
*.db-journal
.DS_Store
Generated
+1296 -46
File diff suppressed because it is too large
+25 -11
@@ -4,7 +4,7 @@ resolver = "2"
[package]
name = "zeroclawlabs"
version = "0.4.0"
version = "0.5.1"
edition = "2021"
authors = ["theonlyhennygod"]
license = "MIT OR Apache-2.0"
@@ -14,15 +14,6 @@ readme = "README.md"
keywords = ["ai", "agent", "cli", "assistant", "chatbot"]
categories = ["command-line-utilities", "api-bindings"]
rust-version = "1.87"
[[bin]]
name = "zeroclaw"
path = "src/main.rs"
[lib]
name = "zeroclaw"
path = "src/lib.rs"
include = [
"/src/**/*",
"/build.rs",
@@ -31,8 +22,17 @@ include = [
"/LICENSE*",
"/README.md",
"/web/dist/**/*",
"/tool_descriptions/**/*",
]
[[bin]]
name = "zeroclaw"
path = "src/main.rs"
[lib]
name = "zeroclaw"
path = "src/lib.rs"
[dependencies]
# CLI - minimal and fast
clap = { version = "4.5", features = ["derive"] }
@@ -53,6 +53,7 @@ matrix-sdk = { version = "0.16", optional = true, default-features = false, feat
serde = { version = "1.0", default-features = false, features = ["derive"] }
serde_json = { version = "1.0", default-features = false, features = ["std"] }
serde_ignored = "0.1"
serde_yaml = "0.9"
# Config
directories = "6.0"
@@ -82,6 +83,12 @@ nanohtml2text = "0.2"
# Optional Rust-native browser automation backend
fantoccini = { version = "0.22.1", optional = true, default-features = false, features = ["rustls-tls"] }
# Progress bars (update pipeline)
indicatif = "0.17"
# Temp files (update pipeline rollback)
tempfile = "3.26"
# Error handling
anyhow = "1.0"
thiserror = "2.0"
@@ -183,6 +190,9 @@ probe-rs = { version = "0.31", optional = true }
# PDF extraction for datasheet RAG (optional, enable with --features rag-pdf)
pdf-extract = { version = "0.10", optional = true }
# WASM plugin runtime (extism)
extism = { version = "1.9", optional = true }
# Terminal QR rendering for WhatsApp Web pairing flow.
qrcode = { version = "0.14", optional = true }
@@ -205,7 +215,7 @@ landlock = { version = "0.4", optional = true }
libc = "0.2"
[features]
default = ["observability-prometheus", "channel-nostr"]
default = ["observability-prometheus", "channel-nostr", "skill-creation"]
channel-nostr = ["dep:nostr-sdk"]
hardware = ["nusb", "tokio-serial"]
channel-matrix = ["dep:matrix-sdk"]
@@ -230,8 +240,12 @@ metrics = ["observability-prometheus"]
probe = ["dep:probe-rs"]
# rag-pdf = PDF ingestion for datasheet RAG
rag-pdf = ["dep:pdf-extract"]
# skill-creation = Autonomous skill creation from successful multi-step tasks
skill-creation = []
# whatsapp-web = Native WhatsApp Web client with custom rusqlite storage backend
whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "dep:serde-big-array", "dep:prost", "dep:qrcode"]
# WASM plugin system (extism-based)
plugins-wasm = ["dep:extism"]
[profile.release]
opt-level = "z" # Optimize for size
+41 -31
@@ -1,9 +1,18 @@
# syntax=docker/dockerfile:1.7
# ── Stage 0: Frontend build ─────────────────────────────────────
FROM node:22-alpine AS web-builder
WORKDIR /web
COPY web/package.json web/package-lock.json* ./
RUN npm ci --ignore-scripts 2>/dev/null || npm install --ignore-scripts
COPY web/ .
RUN npm run build
# ── Stage 1: Build ────────────────────────────────────────────
FROM rust:1.93-slim@sha256:9663b80a1621253d30b146454f903de48f0af925c967be48c84745537cd35d8b AS builder
FROM rust:1.94-slim@sha256:da9dab7a6b8dd428e71718402e97207bb3e54167d37b5708616050b1e8f60ed6 AS builder
WORKDIR /app
ARG ZEROCLAW_CARGO_FEATURES="memory-postgres"
# Install build dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
@@ -14,48 +23,45 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
# 1. Copy manifests to cache dependencies
COPY Cargo.toml Cargo.lock ./
COPY crates/robot-kit/Cargo.toml crates/robot-kit/Cargo.toml
# Remove robot-kit from workspace members — it is excluded by .dockerignore
# and is not needed for the Docker build (hardware-only crate).
RUN sed -i 's/members = \[".", "crates\/robot-kit"\]/members = ["."]/' Cargo.toml
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches crates/robot-kit/src \
RUN mkdir -p src benches \
&& echo "fn main() {}" > src/main.rs \
&& echo "" > src/lib.rs \
&& echo "fn main() {}" > benches/agent_benchmarks.rs \
&& echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs
&& echo "fn main() {}" > benches/agent_benchmarks.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked
RUN rm -rf src benches crates/robot-kit/src
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi
RUN rm -rf src benches
# 2. Copy only build-relevant source paths (avoid cache-busting on docs/tests/scripts)
COPY src/ src/
COPY benches/ benches/
COPY crates/ crates/
COPY firmware/ firmware/
COPY web/ web/
# Keep release builds resilient when frontend dist assets are not prebuilt in Git.
RUN mkdir -p web/dist && \
if [ ! -f web/dist/index.html ]; then \
printf '%s\n' \
'<!doctype html>' \
'<html lang="en">' \
' <head>' \
' <meta charset="utf-8" />' \
' <meta name="viewport" content="width=device-width,initial-scale=1" />' \
' <title>ZeroClaw Dashboard</title>' \
' </head>' \
' <body>' \
' <h1>ZeroClaw Dashboard Unavailable</h1>' \
' <p>Frontend assets are not bundled in this build. Build the web UI to populate <code>web/dist</code>.</p>' \
' </body>' \
'</html>' > web/dist/index.html; \
fi
COPY --from=web-builder /web/dist web/dist
COPY *.rs .
RUN touch src/main.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked && \
rm -rf target/release/.fingerprint/zeroclawlabs-* \
target/release/deps/zeroclawlabs-* \
target/release/incremental/zeroclawlabs-* && \
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
RUN size=$(stat -c%s /app/zeroclaw 2>/dev/null || stat -f%z /app/zeroclaw) && \
if [ "$size" -lt 1000000 ]; then echo "ERROR: binary too small (${size} bytes), likely dummy build artifact" && exit 1; fi
# Prepare runtime directory structure and default config inline (no extra stage)
RUN mkdir -p /zeroclaw-data/.zeroclaw /zeroclaw-data/workspace && \
@@ -107,11 +113,13 @@ ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
HEALTHCHECK --interval=60s --timeout=10s --retries=3 --start-period=10s \
CMD ["zeroclaw", "status", "--format=exit-code"]
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
CMD ["daemon"]
# ── Stage 3: Production Runtime (Distroless) ─────────────────
FROM gcr.io/distroless/cc-debian13:nonroot@sha256:84fcd3c223b144b0cb6edc5ecc75641819842a9679a3a58fd6294bec47532bf7 AS release
FROM gcr.io/distroless/cc-debian13:nonroot@sha256:9c4fe2381c2e6d53c4cfdefeff6edbd2a67ec7713e2c3ca6653806cbdbf27a1e AS release
COPY --from=builder /app/zeroclaw /usr/local/bin/zeroclaw
COPY --from=builder /zeroclaw-data /zeroclaw-data
@@ -131,5 +139,7 @@ ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
HEALTHCHECK --interval=60s --timeout=10s --retries=3 --start-period=10s \
CMD ["zeroclaw", "status", "--format=exit-code"]
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
CMD ["daemon"]
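The binary-size guard in the builder stage exists because the dummy `fn main() {}` built for dependency caching could otherwise leak into the final image. A sketch of the same check, assuming a deliberately tiny temp file in place of `/app/zeroclaw`:

```shell
#!/usr/bin/env bash
# Sketch of the dummy-binary size guard, on a 512-byte temp file.
f="$(mktemp)"
head -c 512 /dev/zero > "$f"
# GNU stat first, BSD stat fallback — same portability trick as above.
size=$(stat -c%s "$f" 2>/dev/null || stat -f%z "$f")
if [ "$size" -lt 1000000 ]; then
  echo "ERROR: binary too small (${size} bytes), likely dummy build artifact"
fi
rm -f "$f"
```

In the Dockerfile the `if` branch additionally runs `exit 1` so the build fails fast instead of shipping a placeholder binary.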
+25
@@ -0,0 +1,25 @@
# Dockerfile.ci — CI/release image using pre-built binaries.
# Used by release workflows to skip the ~60 min Rust compilation.
# The main Dockerfile is still used for local dev builds.
# ── Runtime (Distroless) ─────────────────────────────────────
FROM gcr.io/distroless/cc-debian13:nonroot@sha256:9c4fe2381c2e6d53c4cfdefeff6edbd2a67ec7713e2c3ca6653806cbdbf27a1e
ARG TARGETARCH
# Copy the pre-built binary for this platform (amd64 or arm64)
COPY bin/${TARGETARCH}/zeroclaw /usr/local/bin/zeroclaw
# Runtime directory structure and default config
COPY --chown=65534:65534 zeroclaw-data/ /zeroclaw-data/
ENV LANG=C.UTF-8
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
+35 -30
@@ -1,5 +1,13 @@
# syntax=docker/dockerfile:1.7
# ── Stage 0: Frontend build ─────────────────────────────────────
FROM node:22-alpine AS web-builder
WORKDIR /web
COPY web/package.json web/package-lock.json* ./
RUN npm ci --ignore-scripts 2>/dev/null || npm install --ignore-scripts
COPY web/ .
RUN npm run build
# Dockerfile.debian — Shell-equipped variant of the ZeroClaw container.
#
# The default Dockerfile produces a distroless "release" image with no shell,
@@ -15,10 +23,11 @@
# Or with docker compose:
# docker compose -f docker-compose.yml -f docker-compose.debian.yml up
# ── Stage 1: Build (identical to main Dockerfile) ───────────
FROM rust:1.93-slim@sha256:9663b80a1621253d30b146454f903de48f0af925c967be48c84745537cd35d8b AS builder
# ── Stage 1: Build (match runtime glibc baseline) ───────────
FROM rust:1.94-bookworm AS builder
WORKDIR /app
ARG ZEROCLAW_CARGO_FEATURES="memory-postgres"
# Install build dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
@@ -29,47 +38,41 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
# 1. Copy manifests to cache dependencies
COPY Cargo.toml Cargo.lock ./
COPY crates/robot-kit/Cargo.toml crates/robot-kit/Cargo.toml
# Remove robot-kit from workspace members — it is excluded by .dockerignore
# and is not needed for the Docker build (hardware-only crate).
RUN sed -i 's/members = \[".", "crates\/robot-kit"\]/members = ["."]/' Cargo.toml
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches crates/robot-kit/src \
RUN mkdir -p src benches \
&& echo "fn main() {}" > src/main.rs \
&& echo "fn main() {}" > benches/agent_benchmarks.rs \
&& echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs
&& echo "" > src/lib.rs \
&& echo "fn main() {}" > benches/agent_benchmarks.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked
RUN rm -rf src benches crates/robot-kit/src
if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
else \
cargo build --release --locked; \
fi
RUN rm -rf src benches
# 2. Copy only build-relevant source paths (avoid cache-busting on docs/tests/scripts)
COPY src/ src/
COPY benches/ benches/
COPY crates/ crates/
COPY firmware/ firmware/
COPY web/ web/
-# Keep release builds resilient when frontend dist assets are not prebuilt in Git.
-RUN mkdir -p web/dist && \
-    if [ ! -f web/dist/index.html ]; then \
-    printf '%s\n' \
-      '<!doctype html>' \
-      '<html lang="en">' \
-      ' <head>' \
-      ' <meta charset="utf-8" />' \
-      ' <meta name="viewport" content="width=device-width,initial-scale=1" />' \
-      ' <title>ZeroClaw Dashboard</title>' \
-      ' </head>' \
-      ' <body>' \
-      ' <h1>ZeroClaw Dashboard Unavailable</h1>' \
-      ' <p>Frontend assets are not bundled in this build. Build the web UI to populate <code>web/dist</code>.</p>' \
-      ' </body>' \
-      '</html>' > web/dist/index.html; \
-    fi
+COPY --from=web-builder /web/dist web/dist
RUN touch src/main.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
-    cargo build --release --locked && \
+    if [ -n "$ZEROCLAW_CARGO_FEATURES" ]; then \
+        cargo build --release --locked --features "$ZEROCLAW_CARGO_FEATURES"; \
+    else \
+        cargo build --release --locked; \
+    fi && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
RUN size=$(stat -c%s /app/zeroclaw 2>/dev/null || stat -f%z /app/zeroclaw) && \
if [ "$size" -lt 1000000 ]; then echo "ERROR: binary too small (${size} bytes), likely dummy build artifact" && exit 1; fi
# Prepare runtime directory structure and default config inline (no extra stage)
RUN mkdir -p /zeroclaw-data/.zeroclaw /zeroclaw-data/workspace && \
@@ -116,5 +119,7 @@ ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
HEALTHCHECK --interval=60s --timeout=10s --retries=3 --start-period=10s \
CMD ["zeroclaw", "status", "--format=exit-code"]
ENTRYPOINT ["zeroclaw"]
-CMD ["gateway"]
+CMD ["daemon"]
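The `stat -c%s … || stat -f%z …` fallback in the size guard above covers both GNU coreutils and BSD `stat`. A minimal sketch of the same guard outside Docker, run against a deliberately tiny throwaway file so the check trips (the 1000000-byte threshold is the Dockerfile's own; in the Dockerfile the branch also runs `exit 1` to fail the build):

```shell
# Reproduce the Dockerfile's binary-size guard against a small temp file.
f=$(mktemp)
head -c 512 /dev/zero > "$f"
# GNU stat reports size with -c%s; BSD stat uses -f%z.
size=$(stat -c%s "$f" 2>/dev/null || stat -f%z "$f")
if [ "$size" -lt 1000000 ]; then
  echo "ERROR: binary too small (${size} bytes), likely dummy build artifact"
fi
rm -f "$f"
```

This catches the case where the cached dummy `src/main.rs` artifact is copied out instead of the real binary.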
+34
@@ -0,0 +1,34 @@
# Dockerfile.debian.ci — CI/release Debian image using pre-built binaries.
# Mirrors Dockerfile.ci but uses debian:bookworm-slim with shell tools
# so the agent can use shell-based tools (pwd, ls, git, curl, etc.).
# Used by release workflows to skip ~60 min QEMU cross-compilation.
# ── Runtime (Debian with shell) ────────────────────────────────
FROM debian:bookworm-slim
ARG TARGETARCH
# Install essential tools for agent shell operations
RUN apt-get update && apt-get install -y --no-install-recommends \
bash \
ca-certificates \
curl \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the pre-built binary for this platform (amd64 or arm64)
COPY bin/${TARGETARCH}/zeroclaw /usr/local/bin/zeroclaw
# Runtime directory structure and default config
COPY --chown=65534:65534 zeroclaw-data/ /zeroclaw-data/
ENV LANG=C.UTF-8
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
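The `COPY bin/${TARGETARCH}/zeroclaw` line above means the build context must already carry one pre-built binary per platform before the image is built. A sketch of the expected context layout (directory names follow the COPY lines above; the commented `cp`/`buildx` invocations are assumptions about how the release workflow populates and consumes it):

```shell
# Lay out a build context matching the COPY paths in Dockerfile.debian.ci.
ctx=$(mktemp -d)
mkdir -p "$ctx/bin/amd64" "$ctx/bin/arm64" "$ctx/zeroclaw-data/workspace"
# Drop the cross-compiled binaries in place (target triples illustrative):
# cp target/x86_64-unknown-linux-gnu/release/zeroclaw "$ctx/bin/amd64/"
# cp target/aarch64-unknown-linux-gnu/release/zeroclaw "$ctx/bin/arm64/"
# docker buildx build --platform linux/amd64,linux/arm64 \
#   -f Dockerfile.debian.ci "$ctx"
ls "$ctx/bin"
```

At build time Docker substitutes `TARGETARCH` (`amd64` or `arm64`) per platform, so each image layer copies only its own binary and the ~60 min QEMU cross-compilation is skipped.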
+78
@@ -0,0 +1,78 @@
# Justfile - Convenient command runner for ZeroClaw development
# https://github.com/casey/just
# Default recipe to display help
_default:
@just --list
# Format all code
fmt:
cargo fmt --all
# Check formatting without making changes
fmt-check:
cargo fmt --all -- --check
# Run clippy lints
lint:
cargo clippy --all-targets -- -D warnings
# Run all tests
test:
cargo test --locked
# Run only unit tests (faster)
test-lib:
cargo test --lib
# Run the full CI quality gate locally
ci: fmt-check lint test
@echo "✅ All CI checks passed!"
# Build in release mode
build:
cargo build --release --locked
# Build in debug mode
build-debug:
cargo build
# Clean build artifacts
clean:
cargo clean
# Run zeroclaw with example config (for development)
dev *ARGS:
cargo run -- {{ARGS}}
# Check code without building
check:
cargo check --all-targets
# Run cargo doc and open in browser
doc:
cargo doc --no-deps --open
# Update dependencies
update:
cargo update
# Run cargo audit to check for security vulnerabilities
audit:
cargo audit
# Run cargo deny checks
deny:
cargo deny check
# Format TOML files (requires taplo)
fmt-toml:
taplo format
# Check TOML formatting (requires taplo)
fmt-toml-check:
taplo format --check
# Run all formatting tools
fmt-all: fmt fmt-toml
@echo "✅ All formatting complete!"
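The `ci: fmt-check lint test` recipe above runs its dependencies in declaration order and aborts at the first non-zero exit, so the "✅" line only prints when all three pass. A shell emulation of that fail-fast behavior, with stub functions standing in for the recipes (function names are hypothetical; here the lint step is forced to fail so the test step never runs):

```shell
# Stub recipes: lint fails, so tests never run and the gate reports failure.
fmt_check() { echo "fmt: ok"; }
run_lint()  { echo "lint: error"; return 1; }
run_tests() { echo "tests: ok"; }
ci_gate()   { fmt_check && run_lint && run_tests && echo "✅ All CI checks passed!"; }
ci_gate || echo "CI gate failed"
```

This mirrors running `just ci` locally: a clippy warning promoted to an error by `-D warnings` stops the gate before the (slower) test suite starts.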
+11 -3
@@ -1,5 +1,5 @@
<p align="center" dir="rtl">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -16,7 +16,11 @@
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center" dir="rtl">
@@ -103,7 +107,7 @@
| التاريخ (UTC) | المستوى | الإشعار | الإجراء |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _حرج_ | **نحن غير مرتبطين** بـ `openagen/zeroclaw` أو `zeroclaw.org`. نطاق `zeroclaw.org` يشير حاليًا إلى الفرع `openagen/zeroclaw`، وهذا النطاق/المستودع ينتحل شخصية موقعنا/مشروعنا الرسمي. | لا تثق بالمعلومات أو الملفات الثنائية أو جمع التبرعات أو الإعلانات من هذه المصادر. استخدم فقط [هذا المستودع](https://github.com/zeroclaw-labs/zeroclaw) وحساباتنا الموثقة على وسائل التواصل الاجتماعي. |
-| 2026-02-21 | _مهم_ | موقعنا الرسمي أصبح متاحًا الآن: [zeroclawlabs.ai](https://zeroclawlabs.ai). شكرًا لصبرك أثناء الانتظار. لا نزال نكتشف محاولات الانتحال: لا تشارك في أي نشاط استثمار/تمويل باسم ZeroClaw إذا لم يتم نشره عبر قنواتنا الرسمية. | استخدم [هذا المستودع](https://github.com/zeroclaw-labs/zeroclaw) كمصدر وحيد للحقيقة. تابع [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)، [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs)، [Facebook (مجموعة)](https://www.facebook.com/groups/zeroclaw)، [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/)، و[Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) للتحديثات الرسمية. |
+| 2026-02-21 | _مهم_ | موقعنا الرسمي أصبح متاحًا الآن: [zeroclawlabs.ai](https://zeroclawlabs.ai). شكرًا لصبرك أثناء الانتظار. لا نزال نكتشف محاولات الانتحال: لا تشارك في أي نشاط استثمار/تمويل باسم ZeroClaw إذا لم يتم نشره عبر قنواتنا الرسمية. | استخدم [هذا المستودع](https://github.com/zeroclaw-labs/zeroclaw) كمصدر وحيد للحقيقة. تابع [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)، [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs)، [Facebook (مجموعة)](https://www.facebook.com/groups/zeroclawlabs)، [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/)، و[Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) للتحديثات الرسمية. |
| 2026-02-19 | _مهم_ | قامت Anthropic بتحديث شروط استخدام المصادقة وبيانات الاعتماد في 2026-02-19. مصادقة OAuth (Free، Pro، Max) حصريًا لـ Claude Code و Claude.ai؛ استخدام رموز Claude Free/Pro/Max OAuth في أي منتج أو أداة أو خدمة أخرى (بما في ذلك Agent SDK) غير مسموح به وقد ينتهك شروط استخدام المستهلك. | يرجى تجنب مؤقتًا تكاملات Claude Code OAuth لمنع أي خسارة محتملة. البند الأصلي: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ الميزات
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # تدوير سر الاقتران الحالي
zeroclaw tunnel start # بدء نفق إلى البرنامج الخفي المحلي
zeroclaw tunnel stop # إيقاف النفق النشط
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# التشخيص
zeroclaw doctor # تشغيل فحوصات صحة النظام
zeroclaw version # عرض الإصدار ومعلومات البناء
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# চালান
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### Docker দিয়ে
@@ -177,7 +185,7 @@ channels:
## কমিউনিটি
- [Telegram](https://t.me/zeroclawlabs)
-- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Použijte tuto tabulku pro důležitá oznámení (změny kompatibility, bezpeč
| Datum (UTC) | Úroveň | Oznámení | Akce |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritické_ | **Nejsme propojeni** s `openagen/zeroclaw` nebo `zeroclaw.org`. Doména `zeroclaw.org` aktuálně směřuje na fork `openagen/zeroclaw`, a tato doména/repoziťář se vydává za náš oficiální web/projekt. | Nevěřte informacím, binárním souborům, fundraisingu nebo oznámením z těchto zdrojů. Používejte pouze [tento repoziťář](https://github.com/zeroclaw-labs/zeroclaw) a naše ověřené sociální účty. |
-| 2026-02-21 | _Důležité_ | Náš oficiální web je nyní online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Děkujeme za trpělivost během čekání. Stále detekujeme pokusy o vydávání se: neúčastněte žádné investiční/fundraisingové aktivity ve jménu ZeroClaw pokud není publikována přes naše oficiální kanály. | Používejte [tento repoziťář](https://github.com/zeroclaw-labs/zeroclaw) jako jediný zdroj pravdy. Sledujte [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (skupina)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), a [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) pro oficiální aktualizace. |
+| 2026-02-21 | _Důležité_ | Náš oficiální web je nyní online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Děkujeme za trpělivost během čekání. Stále detekujeme pokusy o vydávání se: neúčastněte žádné investiční/fundraisingové aktivity ve jménu ZeroClaw pokud není publikována přes naše oficiální kanály. | Používejte [tento repoziťář](https://github.com/zeroclaw-labs/zeroclaw) jako jediný zdroj pravdy. Sledujte [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (skupina)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), a [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) pro oficiální aktualizace. |
| 2026-02-19 | _Důležité_ | Anthropic aktualizoval podmínky použití autentizace a přihlašovacích údajů dne 2026-02-19. OAuth autentizace (Free, Pro, Max) je výhradně pro Claude Code a Claude.ai; použití Claude Free/Pro/Max OAuth tokenů v jakémkoliv jiném produktu, nástroji nebo službě (včetně Agent SDK) není povoleno a může porušit Podmínky použití spotřebitele. | Prosím dočasně se vyhněte Claude Code OAuth integracím pro předcházení potenciálním ztrátám. Původní klauzule: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funkce
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # Rotuje existující párovací tajemství
zeroclaw tunnel start # Spouští tunnel k lokálnímu daemon
zeroclaw tunnel stop # Zastavuje aktivní tunnel
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# Diagnostika
zeroclaw doctor # Spouští kontroly zdraví systému
zeroclaw version # Zobrazuje verzi a build informace
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Kør
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### Med Docker
@@ -177,7 +185,7 @@ Se [LICENSE-APACHE](LICENSE-APACHE) og [LICENSE-MIT](LICENSE-MIT) for detaljer.
## Fællesskab
- [Telegram](https://t.me/zeroclawlabs)
-- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -107,7 +111,7 @@ Verwende diese Tabelle für wichtige Hinweise (Kompatibilitätsänderungen, Sich
| Datum (UTC) | Ebene | Hinweis | Aktion |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritisch_ | Wir sind **nicht verbunden** mit `openagen/zeroclaw` oder `zeroclaw.org`. Die Domain `zeroclaw.org` zeigt derzeit auf den Fork `openagen/zeroclaw`, und diese Domain/Repository fälscht unsere offizielle Website/Projekt. | Vertraue keinen Informationen, Binärdateien, Fundraising oder Ankündigungen aus diesen Quellen. Verwende nur [dieses Repository](https://github.com/zeroclaw-labs/zeroclaw) und unsere verifizierten Social-Media-Konten. |
-| 2026-02-21 | _Wichtig_ | Unsere offizielle Website ist jetzt online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Danke für deine Geduld während der Wartezeit. Wir erkennen weiterhin Fälschungsversuche: nimm an keiner Investitions-/Finanzierungsaktivität im Namen von ZeroClaw teil, wenn sie nicht über unsere offiziellen Kanäle veröffentlicht wird. | Verwende [dieses Repository](https://github.com/zeroclaw-labs/zeroclaw) als einzige Quelle der Wahrheit. Folge [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (Gruppe)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), und [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) für offizielle Updates. |
+| 2026-02-21 | _Wichtig_ | Unsere offizielle Website ist jetzt online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Danke für deine Geduld während der Wartezeit. Wir erkennen weiterhin Fälschungsversuche: nimm an keiner Investitions-/Finanzierungsaktivität im Namen von ZeroClaw teil, wenn sie nicht über unsere offiziellen Kanäle veröffentlicht wird. | Verwende [dieses Repository](https://github.com/zeroclaw-labs/zeroclaw) als einzige Quelle der Wahrheit. Folge [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (Gruppe)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), und [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) für offizielle Updates. |
| 2026-02-19 | _Wichtig_ | Anthropic hat die Nutzungsbedingungen für Authentifizierung und Anmeldedaten am 2026-02-19 aktualisiert. Die OAuth-Authentifizierung (Free, Pro, Max) ist ausschließlich für Claude Code und Claude.ai; die Verwendung von Claude Free/Pro/Max OAuth-Token in einem anderen Produkt, Tool oder Dienst (einschließlich Agent SDK) ist nicht erlaubt und kann gegen die Verbrauchernutzungsbedingungen verstoßen. | Bitte vermeide vorübergehend Claude Code OAuth-Integrationen, um potenzielle Verluste zu verhindern. Originalklausel: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funktionen
@@ -370,6 +374,10 @@ zeroclaw pairing rotate # Rotiert das bestehende Pairing-Geheimnis
zeroclaw tunnel start # Startet einen Tunnel zum lokalen Daemon
zeroclaw tunnel stop # Stoppt den aktiven Tunnel
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# Diagnose
zeroclaw doctor # Führt System-Gesundheitsprüfungen durch
zeroclaw version # Zeigt Version und Build-Informationen
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -14,7 +14,11 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -100,6 +104,10 @@ cargo build --release
# Εκτέλεση
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### Με Docker
@@ -176,7 +184,7 @@ channels:
## Κοινότητα
- [Telegram](https://t.me/zeroclawlabs)
-- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Usa esta tabla para avisos importantes (cambios de compatibilidad, avisos de seg
| Fecha (UTC) | Nivel | Aviso | Acción |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Crítico_ | **No estamos afiliados** con `openagen/zeroclaw` o `zeroclaw.org`. El dominio `zeroclaw.org` apunta actualmente al fork `openagen/zeroclaw`, y este dominio/repositorio está suplantando nuestro sitio web/proyecto oficial. | No confíes en información, binarios, recaudaciones de fondos o anuncios de estas fuentes. Usa solo [este repositorio](https://github.com/zeroclaw-labs/zeroclaw) y nuestras cuentas sociales verificadas. |
-| 2026-02-21 | _Importante_ | Nuestro sitio web oficial ahora está en línea: [zeroclawlabs.ai](https://zeroclawlabs.ai). Gracias por tu paciencia durante la espera. Todavía detectamos intentos de suplantación: no participes en ninguna actividad de inversión/financiamiento en nombre de ZeroClaw si no se publica a través de nuestros canales oficiales. | Usa [este repositorio](https://github.com/zeroclaw-labs/zeroclaw) como la única fuente de verdad. Sigue [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), y [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para actualizaciones oficiales. |
+| 2026-02-21 | _Importante_ | Nuestro sitio web oficial ahora está en línea: [zeroclawlabs.ai](https://zeroclawlabs.ai). Gracias por tu paciencia durante la espera. Todavía detectamos intentos de suplantación: no participes en ninguna actividad de inversión/financiamiento en nombre de ZeroClaw si no se publica a través de nuestros canales oficiales. | Usa [este repositorio](https://github.com/zeroclaw-labs/zeroclaw) como la única fuente de verdad. Sigue [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), y [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para actualizaciones oficiales. |
| 2026-02-19 | _Importante_ | Anthropic actualizó los términos de uso de autenticación y credenciales el 2026-02-19. La autenticación OAuth (Free, Pro, Max) es exclusivamente para Claude Code y Claude.ai; el uso de tokens OAuth de Claude Free/Pro/Max en cualquier otro producto, herramienta o servicio (incluyendo Agent SDK) no está permitido y puede violar los Términos de Uso del Consumidor. | Por favor, evita temporalmente las integraciones OAuth de Claude Code para prevenir cualquier pérdida potencial. Cláusula original: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Características
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # Rota el secreto de emparejamiento existente
zeroclaw tunnel start # Inicia un tunnel hacia el daemon local
zeroclaw tunnel stop # Detiene el tunnel activo
+# Migrate from OpenClaw
+zeroclaw migrate openclaw --dry-run
+zeroclaw migrate openclaw
# Diagnóstico
zeroclaw doctor # Ejecuta verificaciones de salud del sistema
zeroclaw version # Muestra versión e información de build
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
+<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
+<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
+<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Aja
cargo run --release
+# Migrate from OpenClaw
+zeroclaw migrate openclaw --dry-run
+zeroclaw migrate openclaw
```
### Dockerilla
@@ -177,7 +185,7 @@ Katso [LICENSE-APACHE](LICENSE-APACHE) ja [LICENSE-MIT](LICENSE-MIT) yksityiskoh
## Yhteisö
- [Telegram](https://t.me/zeroclawlabs)
-- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="docs/assets/zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -14,7 +14,11 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributeurs" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Offrez-moi un café" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X : @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
+<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
+<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
+<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit : r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -101,7 +105,7 @@ Utilisez ce tableau pour les avis importants (changements incompatibles, avis de
| Date (UTC) | Niveau | Avis | Action |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Critique_ | Nous ne sommes **pas affiliés** à `openagen/zeroclaw` ou `zeroclaw.org`. Le domaine `zeroclaw.org` pointe actuellement vers le fork `openagen/zeroclaw`, et ce domaine/dépôt usurpe l'identité de notre site web/projet officiel. | Ne faites pas confiance aux informations, binaires, levées de fonds ou annonces provenant de ces sources. Utilisez uniquement [ce dépôt](https://github.com/zeroclaw-labs/zeroclaw) et nos comptes sociaux vérifiés. |
-| 2026-02-21 | _Important_ | Notre site officiel est désormais en ligne : [zeroclawlabs.ai](https://zeroclawlabs.ai). Merci pour votre patience pendant cette attente. Nous constatons toujours des tentatives d'usurpation : ne participez à aucune activité d'investissement/financement au nom de ZeroClaw si elle n'est pas publiée via nos canaux officiels. | Utilisez [ce dépôt](https://github.com/zeroclaw-labs/zeroclaw) comme source unique de vérité. Suivez [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (groupe)](https://www.facebook.com/groups/zeroclaw), et [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) pour les mises à jour officielles. |
+| 2026-02-21 | _Important_ | Notre site officiel est désormais en ligne : [zeroclawlabs.ai](https://zeroclawlabs.ai). Merci pour votre patience pendant cette attente. Nous constatons toujours des tentatives d'usurpation : ne participez à aucune activité d'investissement/financement au nom de ZeroClaw si elle n'est pas publiée via nos canaux officiels. | Utilisez [ce dépôt](https://github.com/zeroclaw-labs/zeroclaw) comme source unique de vérité. Suivez [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (groupe)](https://www.facebook.com/groups/zeroclawlabs), et [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) pour les mises à jour officielles. |
| 2026-02-19 | _Important_ | Anthropic a mis à jour les conditions d'utilisation de l'authentification et des identifiants le 2026-02-19. L'authentification OAuth (Free, Pro, Max) est exclusivement destinée à Claude Code et Claude.ai ; l'utilisation de tokens OAuth de Claude Free/Pro/Max dans tout autre produit, outil ou service (y compris Agent SDK) n'est pas autorisée et peut violer les Conditions d'utilisation grand public. | Veuillez temporairement éviter les intégrations OAuth de Claude Code pour prévenir toute perte potentielle. Clause originale : [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Fonctionnalités
@@ -364,6 +368,10 @@ zeroclaw pairing rotate # Fait tourner le secret de pairing existant
zeroclaw tunnel start # Démarre un tunnel vers le daemon local
zeroclaw tunnel stop # Arrête le tunnel actif
+# Migrate from OpenClaw
+zeroclaw migrate openclaw --dry-run
+zeroclaw migrate openclaw
# Diagnostic
zeroclaw doctor # Exécute les vérifications de santé du système
zeroclaw version # Affiche la version et les informations de build
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
+<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
+<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
+<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center" dir="rtl">
@@ -107,6 +111,10 @@ cargo build --release
# הפעל
cargo run --release
+# Migrate from OpenClaw
+zeroclaw migrate openclaw --dry-run
+zeroclaw migrate openclaw
```
### עם Docker
@@ -193,7 +201,7 @@ channels:
## קהילה
- [Telegram](https://t.me/zeroclawlabs)
-- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
+<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
+<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
+<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# चलाएं
cargo run --release
+# Migrate from OpenClaw
+zeroclaw migrate openclaw --dry-run
+zeroclaw migrate openclaw
```
### Docker के साथ
@@ -177,7 +185,7 @@ channels:
## समुदाय
- [Telegram](https://t.me/zeroclawlabs)
-- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
+<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
+<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
+<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Futtatás
cargo run --release
+# Migrate from OpenClaw
+zeroclaw migrate openclaw --dry-run
+zeroclaw migrate openclaw
```
### Docker-rel
@@ -177,7 +185,7 @@ Részletekért lásd a [LICENSE-APACHE](LICENSE-APACHE) és [LICENSE-MIT](LICENS
## Közösség
- [Telegram](https://t.me/zeroclawlabs)
-- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
+<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
+<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
+<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Jalankan
cargo run --release
+# Migrate from OpenClaw
+zeroclaw migrate openclaw --dry-run
+zeroclaw migrate openclaw
```
### Dengan Docker
@@ -177,7 +185,7 @@ Lihat [LICENSE-APACHE](LICENSE-APACHE) dan [LICENSE-MIT](LICENSE-MIT) untuk deta
## Komunitas
- [Telegram](https://t.me/zeroclawlabs)
-- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
+<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
+<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
+<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Usa questa tabella per avvisi importanti (cambiamenti di compatibilità, avvisi
| Data (UTC) | Livello | Avviso | Azione |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Critico_ | **Non siamo affiliati** con `openagen/zeroclaw` o `zeroclaw.org`. Il dominio `zeroclaw.org` punta attualmente al fork `openagen/zeroclaw`, e questo dominio/repository sta contraffacendo il nostro sito web/progetto ufficiale. | Non fidarti di informazioni, binari, raccolte fondi o annunci da queste fonti. Usa solo [questo repository](https://github.com/zeroclaw-labs/zeroclaw) e i nostri account social verificati. |
-| 2026-02-21 | _Importante_ | Il nostro sito ufficiale è ora online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Grazie per la pazienza durante l'attesa. Rileviamo ancora tentativi di contraffazione: non partecipare ad alcuna attività di investimento/finanziamento a nome di ZeroClaw se non pubblicata tramite i nostri canali ufficiali. | Usa [questo repository](https://github.com/zeroclaw-labs/zeroclaw) come unica fonte di verità. Segui [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (gruppo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) per aggiornamenti ufficiali. |
+| 2026-02-21 | _Importante_ | Il nostro sito ufficiale è ora online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Grazie per la pazienza durante l'attesa. Rileviamo ancora tentativi di contraffazione: non partecipare ad alcuna attività di investimento/finanziamento a nome di ZeroClaw se non pubblicata tramite i nostri canali ufficiali. | Usa [questo repository](https://github.com/zeroclaw-labs/zeroclaw) come unica fonte di verità. Segui [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (gruppo)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) per aggiornamenti ufficiali. |
| 2026-02-19 | _Importante_ | Anthropic ha aggiornato i termini di utilizzo di autenticazione e credenziali il 2026-02-19. L'autenticazione OAuth (Free, Pro, Max) è esclusivamente per Claude Code e Claude.ai; l'uso di token OAuth di Claude Free/Pro/Max in qualsiasi altro prodotto, strumento o servizio (incluso Agent SDK) non è consentito e può violare i Termini di Utilizzo del Consumatore. | Si prega di evitare temporaneamente le integrazioni OAuth di Claude Code per prevenire qualsiasi potenziale perdita. Clausola originale: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funzionalità
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # Ruota il segreto di pairing esistente
zeroclaw tunnel start # Avvia un tunnel verso il daemon locale
zeroclaw tunnel stop # Ferma il tunnel attivo
+# Migrate from OpenClaw
+zeroclaw migrate openclaw --dry-run
+zeroclaw migrate openclaw
# Diagnostica
zeroclaw doctor # Esegue controlli di salute del sistema
zeroclaw version # Mostra versione e informazioni di build
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
-<img src="docs/assets/zeroclaw.png" alt="ZeroClaw" width="200" />
+<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀(日本語)</h1>
@@ -13,7 +13,11 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
-<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
+<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
+<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
+<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
+<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
@@ -92,7 +96,7 @@
| 日付 (UTC) | レベル | お知らせ | 対応 |
|---|---|---|---|
| 2026-02-19 | _緊急_ | 私たちは `openagen/zeroclaw` および `zeroclaw.org` とは**一切関係ありません**。`zeroclaw.org` は現在 `openagen/zeroclaw` の fork を指しており、そのドメイン/リポジトリは当プロジェクトの公式サイト・公式プロジェクトを装っています。 | これらの情報源による案内、バイナリ、資金調達情報、公式発表は信頼しないでください。必ず[本リポジトリ](https://github.com/zeroclaw-labs/zeroclaw)と認証済み公式SNSのみを参照してください。 |
| 2026-02-21 | _重要_ | 公式サイトを公開しました: [zeroclawlabs.ai](https://zeroclawlabs.ai)。公開までお待ちいただきありがとうございました。引き続きなりすましの試みを確認しているため、ZeroClaw 名義の投資・資金調達などの案内は、公式チャネルで確認できない限り参加しないでください。 | 情報は[本リポジトリ](https://github.com/zeroclaw-labs/zeroclaw)を最優先で確認し、[X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Facebook(グループ)](https://www.facebook.com/groups/zeroclaw)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) と [小紅書アカウント](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) で公式更新を確認してください。 |
| 2026-02-21 | _重要_ | 公式サイトを公開しました: [zeroclawlabs.ai](https://zeroclawlabs.ai)。公開までお待ちいただきありがとうございました。引き続きなりすましの試みを確認しているため、ZeroClaw 名義の投資・資金調達などの案内は、公式チャネルで確認できない限り参加しないでください。 | 情報は[本リポジトリ](https://github.com/zeroclaw-labs/zeroclaw)を最優先で確認し、[X(@zeroclawlabs)](https://x.com/zeroclawlabs?s=21)、[Telegram(@zeroclawlabs)](https://t.me/zeroclawlabs)、[Facebook(グループ)](https://www.facebook.com/groups/zeroclawlabs)、[Reddit(r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) と [小紅書アカウント](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) で公式更新を確認してください。 |
| 2026-02-19 | _重要_ | Anthropic は 2026-02-19 に Authentication and Credential Use を更新しました。条文では、OAuth authentication(Free/Pro/Max)は Claude Code と Claude.ai 専用であり、Claude Free/Pro/Max で取得した OAuth トークンを他の製品・ツール・サービス(Agent SDK を含む)で使用することは許可されず、Consumer Terms of Service 違反に該当すると明記されています。 | 損失回避のため、当面は Claude Code OAuth 連携を試さないでください。原文: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use)。 |
## 概要
@@ -181,6 +185,10 @@ zeroclaw agent -m "Hello, ZeroClaw!"
zeroclaw gateway
zeroclaw daemon
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
## Subscription Auth(OpenAI Codex / Claude Code)
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Harvard, MIT, 그리고 Sundai.Club 커뮤니티의 학생들과 멤버들이
| 날짜 (UTC) | 수준 | 공지 | 조치 |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _중요_ | 우리는 `openagen/zeroclaw` 또는 `zeroclaw.org`와 **관련이 없습니다**. `zeroclaw.org` 도메인은 현재 `openagen/zeroclaw` 포크를 가리키고 있으며, 이 도메인/저장소는 우리의 공식 웹사이트/프로젝트를 사칭하고 있습니다. | 이 소스의 정보, 바이너리, 펀딩, 공지를 신뢰하지 마세요. [이 저장소](https://github.com/zeroclaw-labs/zeroclaw)와 우리의 확인된 소셜 계정만 사용하세요. |
| 2026-02-21 | _중요_ | 우리의 공식 웹사이트가 이제 온라인입니다: [zeroclawlabs.ai](https://zeroclawlabs.ai). 기다려주셔서 감사합니다. 여전히 사칭 시도가 감지되고 있습니다: 공식 채널을 통해 게시되지 않은 ZeroClaw 이름의 모든 투자/펀딩 활동에 참여하지 마세요. | [이 저장소](https://github.com/zeroclaw-labs/zeroclaw)를 유일한 진실의 원천으로 사용하세요. [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (그룹)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), 그리고 [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)를 팔로우하여 공식 업데이트를 받으세요. |
| 2026-02-21 | _중요_ | 우리의 공식 웹사이트가 이제 온라인입니다: [zeroclawlabs.ai](https://zeroclawlabs.ai). 기다려주셔서 감사합니다. 여전히 사칭 시도가 감지되고 있습니다: 공식 채널을 통해 게시되지 않은 ZeroClaw 이름의 모든 투자/펀딩 활동에 참여하지 마세요. | [이 저장소](https://github.com/zeroclaw-labs/zeroclaw)를 유일한 진실의 원천으로 사용하세요. [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (그룹)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), 그리고 [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)를 팔로우하여 공식 업데이트를 받으세요. |
| 2026-02-19 | _중요_ | Anthropic이 2026-02-19에 인증 및 자격증명 사용 약관을 업데이트했습니다. OAuth 인증(Free, Pro, Max)은 Claude Code 및 Claude.ai 전용입니다. 다른 제품, 도구 또는 서비스(Agent SDK 포함)에서 Claude Free/Pro/Max OAuth 토큰을 사용하는 것은 허용되지 않으며 소비자 이용약관을 위반할 수 있습니다. | 잠재적인 손실을 방지하기 위해 일시적으로 Claude Code OAuth 통합을 피하세요. 원본 조항: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ 기능
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # 기존 페어링 시크릿 교체
zeroclaw tunnel start # 로컬 데몬으로 터널 시작
zeroclaw tunnel stop # 활성 터널 중지
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# 진단
zeroclaw doctor # 시스템 상태 검사 실행
zeroclaw version # 버전 및 빌드 정보 표시
@@ -1,5 +1,5 @@
<p align="center">
<img src="docs/assets/zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -14,7 +14,11 @@
<a href="https://github.com/zeroclaw-labs/zeroclaw/graphs/contributors"><img src="https://img.shields.io/github/contributors/zeroclaw-labs/zeroclaw?color=green" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -94,7 +98,7 @@ Use this board for important notices (breaking changes, security advisories, mai
| Date (UTC) | Level | Notice | Action |
| ---------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw`, `zeroclaw.org`, or `zeroclaw.net`. The `zeroclaw.org` and `zeroclaw.net` domains currently point to the `openagen/zeroclaw` fork, and those domains/repositories are impersonating our official website/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thanks for your patience while we prepared the launch. We are still seeing impersonation attempts, so do **not** join any investment or fundraising activity claiming the ZeroClaw name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (Group)](https://www.facebook.com/groups/zeroclaw), and [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) for official updates. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thanks for your patience while we prepared the launch. We are still seeing impersonation attempts, so do **not** join any investment or fundraising activity claiming the ZeroClaw name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (Group)](https://www.facebook.com/groups/zeroclawlabs), and [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated the Authentication and Credential Use terms on 2026-02-19. Claude Code OAuth tokens (Free, Pro, Max) are intended exclusively for Claude Code and Claude.ai; using OAuth tokens from Claude Free/Pro/Max in any other product, tool, or service (including Agent SDK) is not permitted and may violate the Consumer Terms of Service. | Please temporarily avoid Claude Code OAuth integrations to prevent potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Kjør
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### Med Docker
@@ -177,7 +185,7 @@ Se [LICENSE-APACHE](LICENSE-APACHE) og [LICENSE-MIT](LICENSE-MIT) for detaljer.
## Fellesskap
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Gebruik deze tabel voor belangrijke aankondigingen (compatibiliteitswijzigingen,
| Datum (UTC) | Niveau | Aankondiging | Actie |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritiek_ | We zijn **niet gelieerd** aan `openagen/zeroclaw` of `zeroclaw.org`. Het domein `zeroclaw.org` wijst momenteel naar de fork `openagen/zeroclaw`, en dit domein/repository imiteert onze officiële website/project. | Vertrouw geen informatie, binaire bestanden, fondsenwerving of aankondigingen van deze bronnen. Gebruik alleen [deze repository](https://github.com/zeroclaw-labs/zeroclaw) en onze geverifieerde sociale media accounts. |
| 2026-02-21 | _Belangrijk_ | Onze officiële website is nu online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bedankt voor je geduld tijdens het wachten. We detecteren nog steeds imitatiepogingen: neem niet deel aan enige investering/fondsenwerving activiteit in naam van ZeroClaw als deze niet via onze officiële kanalen wordt gepubliceerd. | Gebruik [deze repository](https://github.com/zeroclaw-labs/zeroclaw) als de enige bron van waarheid. Volg [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (groep)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), en [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) voor officiële updates. |
| 2026-02-21 | _Belangrijk_ | Onze officiële website is nu online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bedankt voor je geduld tijdens het wachten. We detecteren nog steeds imitatiepogingen: neem niet deel aan enige investering/fondsenwerving activiteit in naam van ZeroClaw als deze niet via onze officiële kanalen wordt gepubliceerd. | Gebruik [deze repository](https://github.com/zeroclaw-labs/zeroclaw) als de enige bron van waarheid. Volg [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (groep)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), en [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) voor officiële updates. |
| 2026-02-19 | _Belangrijk_ | Anthropic heeft de gebruiksvoorwaarden voor authenticatie en inloggegevens bijgewerkt op 2026-02-19. OAuth authenticatie (Free, Pro, Max) is exclusief voor Claude Code en Claude.ai; het gebruik van Claude Free/Pro/Max OAuth tokens in enig ander product, tool of service (inclusief Agent SDK) is niet toegestaan en kan in strijd zijn met de Consumenten Gebruiksvoorwaarden. | Vermijd tijdelijk Claude Code OAuth integraties om potentiële verliezen te voorkomen. Originele clausule: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Functies
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # Roteert het bestaande pairing geheim
zeroclaw tunnel start # Start een tunnel naar de lokale daemon
zeroclaw tunnel stop # Stopt de actieve tunnel
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# Diagnostiek
zeroclaw doctor # Voert systeem gezondheidscontroles uit
zeroclaw version # Toont versie en build informatie
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Użyj tej tabeli dla ważnych ogłoszeń (zmiany kompatybilności, powiadomienia
| Data (UTC) | Poziom | Ogłoszenie | Działanie |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Krytyczny_ | **Nie jesteśmy powiązani** z `openagen/zeroclaw` lub `zeroclaw.org`. Domena `zeroclaw.org` obecnie wskazuje na fork `openagen/zeroclaw`, i ta domena/repozytorium podszywa się pod naszą oficjalną stronę/projekt. | Nie ufaj informacjom, plikom binarnym, zbiórkom funduszy lub ogłoszeniom z tych źródeł. Używaj tylko [tego repozytorium](https://github.com/zeroclaw-labs/zeroclaw) i naszych zweryfikowanych kont społecznościowych. |
| 2026-02-21 | _Ważne_ | Nasza oficjalna strona jest teraz online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Dziękujemy za cierpliwość podczas oczekiwania. Nadal wykrywamy próby podszywania się: nie uczestnicz w żadnej działalności inwestycyjnej/finansowej w imieniu ZeroClaw jeśli nie jest opublikowana przez nasze oficjalne kanały. | Używaj [tego repozytorium](https://github.com/zeroclaw-labs/zeroclaw) jako jedynego źródła prawdy. Śledź [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupa)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), i [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) dla oficjalnych aktualizacji. |
| 2026-02-21 | _Ważne_ | Nasza oficjalna strona jest teraz online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Dziękujemy za cierpliwość podczas oczekiwania. Nadal wykrywamy próby podszywania się: nie uczestnicz w żadnej działalności inwestycyjnej/finansowej w imieniu ZeroClaw jeśli nie jest opublikowana przez nasze oficjalne kanały. | Używaj [tego repozytorium](https://github.com/zeroclaw-labs/zeroclaw) jako jedynego źródła prawdy. Śledź [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupa)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), i [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) dla oficjalnych aktualizacji. |
| 2026-02-19 | _Ważne_ | Anthropic zaktualizował warunki używania uwierzytelniania i poświadczeń 2026-02-19. Uwierzytelnianie OAuth (Free, Pro, Max) jest wyłącznie dla Claude Code i Claude.ai; używanie tokenów OAuth Claude Free/Pro/Max w jakimkolwiek innym produkcie, narzędziu lub usłudze (w tym Agent SDK) nie jest dozwolone i może naruszać Warunki Użytkowania Konsumenta. | Prosimy tymczasowo unikać integracji OAuth Claude Code aby zapobiec potencjalnym stratom. Oryginalna klauzula: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funkcje
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # Rotuje istniejący sekret parowania
zeroclaw tunnel start # Uruchamia tunnel do lokalnego daemon
zeroclaw tunnel stop # Zatrzymuje aktywny tunnel
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# Diagnostyka
zeroclaw doctor # Uruchamia sprawdzenia zdrowia systemu
zeroclaw version # Pokazuje wersję i informacje o build
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Use esta tabela para avisos importantes (mudanças de compatibilidade, avisos de
| Data (UTC) | Nível | Aviso | Ação |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Crítico_ | **Não somos afiliados** ao `openagen/zeroclaw` ou `zeroclaw.org`. O domínio `zeroclaw.org` atualmente aponta para o fork `openagen/zeroclaw`, e este domínio/repositório está falsificando nosso site/projeto oficial. | Não confie em informações, binários, arrecadações ou anúncios dessas fontes. Use apenas [este repositório](https://github.com/zeroclaw-labs/zeroclaw) e nossas contas sociais verificadas. |
| 2026-02-21 | _Importante_ | Nosso site oficial agora está online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Obrigado pela paciência durante a espera. Ainda detectamos tentativas de falsificação: não participe de nenhuma atividade de investimento/financiamento em nome do ZeroClaw se não for publicada através de nossos canais oficiais. | Use [este repositório](https://github.com/zeroclaw-labs/zeroclaw) como a única fonte de verdade. Siga [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para atualizações oficiais. |
| 2026-02-21 | _Importante_ | Nosso site oficial agora está online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Obrigado pela paciência durante a espera. Ainda detectamos tentativas de falsificação: não participe de nenhuma atividade de investimento/financiamento em nome do ZeroClaw se não for publicada através de nossos canais oficiais. | Use [este repositório](https://github.com/zeroclaw-labs/zeroclaw) como a única fonte de verdade. Siga [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para atualizações oficiais. |
| 2026-02-19 | _Importante_ | A Anthropic atualizou os termos de uso de autenticação e credenciais em 2026-02-19. A autenticação OAuth (Free, Pro, Max) é exclusivamente para Claude Code e Claude.ai; o uso de tokens OAuth do Claude Free/Pro/Max em qualquer outro produto, ferramenta ou serviço (incluindo Agent SDK) não é permitido e pode violar os Termos de Uso do Consumidor. | Por favor, evite temporariamente as integrações OAuth do Claude Code para prevenir qualquer perda potencial. Cláusula original: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Funcionalidades
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # Rotaciona o segredo de emparelhamento existente
zeroclaw tunnel start # Inicia um tunnel para o daemon local
zeroclaw tunnel stop # Para o tunnel ativo
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# Diagnóstico
zeroclaw doctor # Executa verificações de saúde do sistema
zeroclaw version # Mostra versão e informações de build
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Rulează
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### Cu Docker
@@ -177,7 +185,7 @@ Vezi [LICENSE-APACHE](LICENSE-APACHE) și [LICENSE-MIT](LICENSE-MIT) pentru deta
## Comunitate
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="docs/assets/zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀(Русский)</h1>
@@ -13,7 +13,11 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
@@ -92,7 +96,7 @@
| Дата (UTC) | Уровень | Объявление | Действие |
|---|---|---|---|
| 2026-02-19 | _Срочно_ | Мы **не аффилированы** с `openagen/zeroclaw` и `zeroclaw.org`. Домен `zeroclaw.org` сейчас указывает на fork `openagen/zeroclaw`, и этот домен/репозиторий выдают себя за наш официальный сайт и проект. | Не доверяйте информации, бинарникам, сборам средств и «официальным» объявлениям из этих источников. Используйте только [этот репозиторий](https://github.com/zeroclaw-labs/zeroclaw) и наши верифицированные соцсети. |
| 2026-02-21 | _Важно_ | Наш официальный сайт уже запущен: [zeroclawlabs.ai](https://zeroclawlabs.ai). Спасибо, что дождались запуска. При этом попытки выдавать себя за ZeroClaw продолжаются, поэтому не участвуйте в инвестициях, сборах средств и похожих активностях, если они не подтверждены через наши официальные каналы. | Ориентируйтесь только на [этот репозиторий](https://github.com/zeroclaw-labs/zeroclaw); также следите за [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (группа)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) и [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) для официальных обновлений. |
| 2026-02-21 | _Важно_ | Наш официальный сайт уже запущен: [zeroclawlabs.ai](https://zeroclawlabs.ai). Спасибо, что дождались запуска. При этом попытки выдавать себя за ZeroClaw продолжаются, поэтому не участвуйте в инвестициях, сборах средств и похожих активностях, если они не подтверждены через наши официальные каналы. | Ориентируйтесь только на [этот репозиторий](https://github.com/zeroclaw-labs/zeroclaw); также следите за [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (группа)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) и [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) для официальных обновлений. |
| 2026-02-19 | _Важно_ | Anthropic обновил раздел Authentication and Credential Use 2026-02-19. В нем указано, что OAuth authentication (Free/Pro/Max) предназначена только для Claude Code и Claude.ai; использование OAuth-токенов, полученных через Claude Free/Pro/Max, в любых других продуктах, инструментах или сервисах (включая Agent SDK), не допускается и может считаться нарушением Consumer Terms of Service. | Чтобы избежать потерь, временно не используйте Claude Code OAuth-интеграции. Оригинал: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
## О проекте
@@ -181,6 +185,10 @@ zeroclaw agent -m "Hello, ZeroClaw!"
zeroclaw gateway
zeroclaw daemon
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
## Subscription Auth (OpenAI Codex / Claude Code)
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Kör
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### Med Docker
@@ -177,7 +185,7 @@ Se [LICENSE-APACHE](LICENSE-APACHE) och [LICENSE-MIT](LICENSE-MIT) för detaljer
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Run
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### ด้วย Docker
@@ -177,7 +185,7 @@ channels:
## ชุมชน
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Gamitin ang talahanayang ito para sa mahahalagang paunawa (compatibility changes
| Petsa (UTC) | Antas | Paunawa | Aksyon |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritikal_ | **Hindi kami kaugnay** sa `openagen/zeroclaw` o `zeroclaw.org`. Ang domain na `zeroclaw.org` ay kasalukuyang tumuturo sa fork na `openagen/zeroclaw`, at ang domain/repository na ito ay nanggagaya sa aming opisyal na website/proyekto. | Huwag magtiwala sa impormasyon, binaries, fundraising, o mga anunsyo mula sa mga pinagmulang ito. Gamitin lamang [ang repository na ito](https://github.com/zeroclaw-labs/zeroclaw) at aming mga verified social media accounts. |
| 2026-02-21 | _Mahalaga_ | Ang aming opisyal na website ay ngayon online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Salamat sa iyong pasensya sa panahon ng paghihintay. Nakikita pa rin namin ang mga pagtatangka ng panliliko: huwag lumahok sa anumang investment/funding activity sa ngalan ng ZeroClaw kung hindi ito nai-publish sa pamamagitan ng aming mga opisyal na channel. | Gamitin [ang repository na ito](https://github.com/zeroclaw-labs/zeroclaw) bilang nag-iisang source of truth. Sundan [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), at [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para sa mga opisyal na update. |
| 2026-02-21 | _Mahalaga_ | Ang aming opisyal na website ay ngayon online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Salamat sa iyong pasensya sa panahon ng paghihintay. Nakikita pa rin namin ang mga pagtatangka ng panliliko: huwag lumahok sa anumang investment/funding activity sa ngalan ng ZeroClaw kung hindi ito nai-publish sa pamamagitan ng aming mga opisyal na channel. | Gamitin [ang repository na ito](https://github.com/zeroclaw-labs/zeroclaw) bilang nag-iisang source of truth. Sundan [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), at [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para sa mga opisyal na update. |
| 2026-02-19 | _Mahalaga_ | In-update ng Anthropic ang authentication at credential use terms noong 2026-02-19. Ang OAuth authentication (Free, Pro, Max) ay eksklusibo para sa Claude Code at Claude.ai; ang paggamit ng Claude Free/Pro/Max OAuth tokens sa anumang iba pang produkto, tool, o serbisyo (kasama ang Agent SDK) ay hindi pinapayagan at maaaring lumabag sa Consumer Terms of Use. | Mangyaring pansamantalang iwasan ang Claude Code OAuth integrations upang maiwasan ang anumang potensyal na pagkawala. Orihinal na clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Mga Tampok
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # Nag-rotate ng existing pairing secret
zeroclaw tunnel start # Nagse-start ng tunnel sa local daemon
zeroclaw tunnel stop # Naghihinto sa active tunnel
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# Diagnostics
zeroclaw doctor # Nagpapatakbo ng system health checks
zeroclaw version # Nagpapakita ng version at build info
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -103,7 +107,7 @@ Harvard, MIT ve Sundai.Club topluluklarının öğrencileri ve üyeleri tarafın
| Tarih (UTC) | Seviye | Duyuru | Eylem |
| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 2026-02-19 | _Kritik_ | **`openagen/zeroclaw` veya `zeroclaw.org` ile bağlantılı değiliz.** `zeroclaw.org` alanı şu anda `openagen/zeroclaw` fork'una işaret ediyor ve bu alan/depo taklitçiliğini yapıyor. | Bu kaynaklardan bilgi, ikili dosyalar, bağış toplama veya duyurulara güvenmeyin. Sadece [bu depoyu](https://github.com/zeroclaw-labs/zeroclaw) ve doğrulanmış sosyal medya hesaplarımızı kullanın. |
| 2026-02-21 | _Önemli_ | Resmi web sitemiz artık çevrimiçi: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bekleme sürecinde sabırlarınız için teşekkürler. Hala taklit girişimleri tespit ediyoruz: ZeroClaw adına resmi kanallarımız aracılığıyla yayınlanmayan herhangi bir yatırım/bağış faaliyetine katılmayın. | [Bu depoyu](https://github.com/zeroclaw-labs/zeroclaw) tek doğruluk kaynağı olarak kullanın. Resmi güncellemeler için [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grup)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) ve [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)'u takip edin. |
| 2026-02-21 | _Önemli_ | Resmi web sitemiz artık çevrimiçi: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bekleme sürecinde sabırlarınız için teşekkürler. Hala taklit girişimleri tespit ediyoruz: ZeroClaw adına resmi kanallarımız aracılığıyla yayınlanmayan herhangi bir yatırım/bağış faaliyetine katılmayın. | [Bu depoyu](https://github.com/zeroclaw-labs/zeroclaw) tek doğruluk kaynağı olarak kullanın. Resmi güncellemeler için [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grup)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) ve [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)'u takip edin. |
| 2026-02-19 | _Önemli_ | Anthropic, 2026-02-19 tarihinde kimlik doğrulama ve kimlik bilgileri kullanım şartlarını güncelledi. OAuth kimlik doğrulaması (Free, Pro, Max) yalnızca Claude Code ve Claude.ai içindir; Claude Free/Pro/Max OAuth belirteçlerini başka herhangi bir ürün, araç veya hizmette (Agent SDK dahil) kullanmak yasaktır ve Tüketici Kullanım Şartlarını ihlal edebilir. | Olası kayıpları önlemek için lütfen geçici olarak Claude Code OAuth entegrasyonlarından kaçının. Orijinal madde: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Özellikler
@@ -366,6 +370,10 @@ zeroclaw pairing rotate # Mevcut eşleştirme sırrını döndürür
zeroclaw tunnel start # Yerel arka plan programına bir tünel başlatır
zeroclaw tunnel stop # Aktif tüneli durdurur
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
# Teşhis
zeroclaw doctor # Sistem sağlık kontrollerini çalıştırır
zeroclaw version # Sürüm ve derleme bilgilerini gösterir
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center">
@@ -101,6 +105,10 @@ cargo build --release
# Run
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### With Docker
@@ -177,7 +185,7 @@ channels:
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -17,7 +17,11 @@
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
</p>
<p align="center" dir="rtl">
@@ -107,6 +111,10 @@ cargo build --release
# Run
cargo run --release
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
### With Docker
@@ -193,7 +201,7 @@ channels:
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [Facebook Group](https://www.facebook.com/groups/zeroclawlabs)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
+7 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="docs/assets/zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -14,7 +14,11 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
@@ -101,7 +105,7 @@ This table is for important notices (incompatible changes
| Date (UTC) | Severity | Notice | Action |
|---|---|---|---|
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository is impersonating our official website/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social media accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you all for your patience. We are still seeing impersonation attempts, so do **not** take part in any investment or fundraising activity in ZeroClaw's name unless it was announced through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single trusted source of information. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), and [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) for official updates. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you all for your patience. We are still seeing impersonation attempts, so do **not** take part in any investment or fundraising activity in ZeroClaw's name unless it was announced through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single trusted source of information. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (group)](https://www.facebook.com/groups/zeroclawlabs), and [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated the Authentication and Credential Use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is reserved for Claude Code and Claude.ai; using OAuth tokens from Claude Free/Pro/Max in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the Consumer Terms of Service. | Please temporarily avoid Claude Code OAuth integrations to prevent potential losses. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
+11 -3
@@ -1,5 +1,5 @@
<p align="center">
<img src="docs/assets/zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/docs/assets/zeroclaw-banner.png" alt="ZeroClaw" width="600" />
</p>
<h1 align="center">ZeroClaw 🦀 (Simplified Chinese)</h1>
@@ -13,7 +13,11 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.facebook.com/groups/zeroclawlabs"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://discord.com/invite/wDshRVqRjx"><img src="https://img.shields.io/badge/Discord-Join-5865F2?style=flat&logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://www.instagram.com/therealzeroclaw"><img src="https://img.shields.io/badge/Instagram-%40therealzeroclaw-E4405F?style=flat&logo=instagram&logoColor=white" alt="Instagram: @therealzeroclaw" /></a>
<a href="https://www.tiktok.com/@zeroclawlabs"><img src="https://img.shields.io/badge/TikTok-%40zeroclawlabs-000000?style=flat&logo=tiktok&logoColor=white" alt="TikTok: @zeroclawlabs" /></a>
<a href="https://www.rednote.com/user/profile/69b735e6000000002603927e"><img src="https://img.shields.io/badge/RedNote-Official-FF2442?style=flat" alt="RedNote" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
@@ -92,7 +96,7 @@
| Date (UTC) | Level | Notice | Action |
|---|---|---|---|
| 2026-02-19 | _Critical_ | We have **no affiliation** with `openagen/zeroclaw` or `zeroclaw.org`. `zeroclaw.org` currently points to the `openagen/zeroclaw` fork, and that domain/repository is impersonating our official website and project. | Do not trust any information, binaries, fundraising activity, or official statements from those sources. Rely only on [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified official social media accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your continued patience. We are still finding impersonation attempts; do not take part in any investment, fundraising, or similar activity carried out in ZeroClaw's name but not announced through our official channels. | Treat [this repository](https://github.com/zeroclaw-labs/zeroclaw) as authoritative; follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your continued patience. We are still finding impersonation attempts; do not take part in any investment, fundraising, or similar activity carried out in ZeroClaw's name but not announced through our official channels. | Treat [this repository](https://github.com/zeroclaw-labs/zeroclaw) as authoritative; follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclawlabs), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated the Authentication and Credential Use terms on 2026-02-19. The terms make clear that OAuth authentication (for Free, Pro, Max) applies only to Claude Code and Claude.ai; using OAuth tokens obtained from a Claude Free/Pro/Max account in any other product, tool, or service (including the Agent SDK) is not permitted and may constitute a violation of the Consumer Terms of Service. | To avoid losses, please hold off on Claude Code OAuth integrations for now; original text: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
## Project Overview
@@ -186,6 +190,10 @@ zeroclaw gateway
# Start long-running mode
zeroclaw daemon
# Migrate from OpenClaw
zeroclaw migrate openclaw --dry-run
zeroclaw migrate openclaw
```
## Subscription Auth (OpenAI Codex / Claude Code)
+73 -2
@@ -1,22 +1,31 @@
use std::fs;
use std::path::Path;
use std::process::Command;
use std::time::SystemTime;
fn main() {
let dist_dir = Path::new("web/dist");
let web_dir = Path::new("web");
// Tell Cargo to re-run this script when web source files change.
// Tell Cargo to re-run this script when web sources or bundled assets change.
println!("cargo:rerun-if-changed=web/src");
println!("cargo:rerun-if-changed=web/public");
println!("cargo:rerun-if-changed=web/index.html");
println!("cargo:rerun-if-changed=docs/assets/zeroclaw-trans.png");
println!("cargo:rerun-if-changed=web/package.json");
println!("cargo:rerun-if-changed=web/package-lock.json");
println!("cargo:rerun-if-changed=web/tsconfig.json");
println!("cargo:rerun-if-changed=web/tsconfig.app.json");
println!("cargo:rerun-if-changed=web/tsconfig.node.json");
println!("cargo:rerun-if-changed=web/vite.config.ts");
println!("cargo:rerun-if-changed=web/dist");
// Attempt to build the web frontend if npm is available and web/dist is
// missing or stale. The build is best-effort: when Node.js is not
// installed (e.g. CI containers, cross-compilation, minimal dev setups)
// we fall back to the existing stub/empty dist directory so the Rust
// build still succeeds.
let needs_build = !dist_dir.join("index.html").exists();
let needs_build = web_build_required(web_dir, dist_dir);
if needs_build && web_dir.join("package.json").exists() {
if let Ok(npm) = which_npm() {
@@ -75,6 +84,50 @@ fn main() {
}
ensure_dist_dir(dist_dir);
ensure_dashboard_assets(dist_dir);
}
fn web_build_required(web_dir: &Path, dist_dir: &Path) -> bool {
let Some(dist_mtime) = latest_modified(dist_dir) else {
return true;
};
[
web_dir.join("src"),
web_dir.join("public"),
web_dir.join("index.html"),
web_dir.join("package.json"),
web_dir.join("package-lock.json"),
web_dir.join("tsconfig.json"),
web_dir.join("tsconfig.app.json"),
web_dir.join("tsconfig.node.json"),
web_dir.join("vite.config.ts"),
]
.into_iter()
.filter_map(|path| latest_modified(&path))
.any(|mtime| mtime > dist_mtime)
}
fn latest_modified(path: &Path) -> Option<SystemTime> {
let metadata = fs::metadata(path).ok()?;
if metadata.is_file() {
return metadata.modified().ok();
}
if !metadata.is_dir() {
return None;
}
let mut latest = metadata.modified().ok();
let entries = fs::read_dir(path).ok()?;
for entry in entries.flatten() {
if let Some(child_mtime) = latest_modified(&entry.path()) {
latest = Some(match latest {
Some(current) if current >= child_mtime => current,
_ => child_mtime,
});
}
}
latest
}
/// Ensure the dist directory exists so `rust-embed` does not fail at compile
@@ -85,6 +138,24 @@ fn ensure_dist_dir(dist_dir: &Path) {
}
}
fn ensure_dashboard_assets(dist_dir: &Path) {
// The Rust gateway serves `web/dist/` via rust-embed under `/_app/*`.
// Some builds may end up with missing/blank logo assets, so we ensure the
// expected image is always present in `web/dist/` at compile time.
let src = Path::new("docs/assets/zeroclaw-trans.png");
if !src.exists() {
eprintln!(
"cargo:warning=docs/assets/zeroclaw-trans.png not found; skipping dashboard asset copy"
);
return;
}
let dst = dist_dir.join("zeroclaw-trans.png");
if let Err(e) = fs::copy(src, &dst) {
eprintln!("cargo:warning=Failed to copy zeroclaw-trans.png into web/dist/: {e}");
}
}
/// Locate the `npm` binary on the system PATH.
fn which_npm() -> Result<String, ()> {
let cmd = if cfg!(target_os = "windows") {
+7
@@ -12,6 +12,13 @@ ignore = [
# bincode v2.0.1 via probe-rs — project ceased but 1.3.3 considered complete
"RUSTSEC-2025-0141",
{ id = "RUSTSEC-2024-0384", reason = "Reported to `rust-nostr/nostr` and it's WIP" },
{ id = "RUSTSEC-2024-0388", reason = "derivative via extism → wasmtime transitive dep" },
{ id = "RUSTSEC-2025-0057", reason = "fxhash via extism → wasmtime transitive dep" },
{ id = "RUSTSEC-2025-0119", reason = "number_prefix via indicatif — cosmetic dep" },
# wasmtime vulns via extism 1.13.0 — no upstream fix yet; plugins feature-gated
{ id = "RUSTSEC-2026-0006", reason = "wasmtime segfault via extism; awaiting extism upgrade" },
{ id = "RUSTSEC-2026-0020", reason = "WASI resource exhaustion via extism; awaiting extism upgrade" },
{ id = "RUSTSEC-2026-0021", reason = "WASI http fields panic via extism; awaiting extism upgrade" },
]
[licenses]
+16
@@ -0,0 +1,16 @@
pkgbase = zeroclaw
pkgdesc = Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant.
pkgver = 0.4.3
pkgrel = 1
url = https://github.com/zeroclaw-labs/zeroclaw
arch = x86_64
license = MIT
license = Apache-2.0
makedepends = cargo
makedepends = git
depends = gcc-libs
depends = openssl
source = zeroclaw-0.4.3.tar.gz::https://github.com/zeroclaw-labs/zeroclaw/archive/refs/tags/v0.4.3.tar.gz
sha256sums = SKIP
pkgname = zeroclaw
+32
@@ -0,0 +1,32 @@
# Maintainer: zeroclaw-labs <bot@zeroclaw.dev>
pkgname=zeroclaw
pkgver=0.4.3
pkgrel=1
pkgdesc="Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant."
arch=('x86_64')
url="https://github.com/zeroclaw-labs/zeroclaw"
license=('MIT' 'Apache-2.0')
depends=('gcc-libs' 'openssl')
makedepends=('cargo' 'git')
source=("${pkgname}-${pkgver}.tar.gz::https://github.com/zeroclaw-labs/zeroclaw/archive/refs/tags/v${pkgver}.tar.gz")
sha256sums=('SKIP')
prepare() {
cd "${pkgname}-${pkgver}"
export RUSTUP_TOOLCHAIN=stable
cargo fetch --locked --target "$(rustc -vV | sed -n 's/host: //p')"
}
build() {
cd "${pkgname}-${pkgver}"
export RUSTUP_TOOLCHAIN=stable
export CARGO_TARGET_DIR=target
cargo build --frozen --release --profile dist
}
package() {
cd "${pkgname}-${pkgver}"
install -Dm0755 -t "${pkgdir}/usr/bin/" "target/dist/zeroclaw"
install -Dm0644 LICENSE-MIT "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE-MIT"
install -Dm0644 LICENSE-APACHE "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE-APACHE"
}
+27
@@ -0,0 +1,27 @@
{
"version": "0.4.3",
"description": "Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant.",
"homepage": "https://github.com/zeroclaw-labs/zeroclaw",
"license": "MIT|Apache-2.0",
"architecture": {
"64bit": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v0.4.3/zeroclaw-x86_64-pc-windows-msvc.zip",
"hash": "",
"bin": "zeroclaw.exe"
}
},
"checkver": {
"github": "https://github.com/zeroclaw-labs/zeroclaw"
},
"autoupdate": {
"architecture": {
"64bit": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v$version/zeroclaw-x86_64-pc-windows-msvc.zip"
}
},
"hash": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v$version/SHA256SUMS",
"regex": "([a-f0-9]{64})\\s+zeroclaw-x86_64-pc-windows-msvc\\.zip"
}
}
}
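The `autoupdate.hash` block above extracts the Windows binary digest from the release `SHA256SUMS` asset using that regex. A minimal Rust sketch of the same lookup (hypothetical helper, not project code; assumes the conventional `<64-hex-digest>  <filename>` line format):

```rust
// Find the SHA256 digest for a named release asset in a SHA256SUMS file.
// Mirrors the manifest regex: 64 lowercase hex chars followed by the filename.
fn asset_digest(sha256sums: &str, asset: &str) -> Option<String> {
    sha256sums.lines().find_map(|line| {
        let mut parts = line.split_whitespace();
        let hash = parts.next()?;
        let name = parts.next()?;
        let hex_ok = hash.len() == 64
            && hash.chars().all(|c| c.is_ascii_digit() || ('a'..='f').contains(&c));
        (hex_ok && name == asset).then(|| hash.to_string())
    })
}

fn main() {
    // Stand-in digests; real SHA256SUMS files are generated at release time.
    let sums = format!(
        "{}  zeroclaw-x86_64-pc-windows-msvc.zip\n{}  zeroclaw-aarch64-apple-darwin.tar.gz",
        "ab".repeat(32),
        "cd".repeat(32)
    );
    assert_eq!(
        asset_digest(&sums, "zeroclaw-x86_64-pc-windows-msvc.zip"),
        Some("ab".repeat(32))
    );
    assert_eq!(asset_digest(&sums, "missing.zip"), None);
    println!("digest lookup ok");
}
```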
+6 -3
@@ -10,6 +10,9 @@
services:
zeroclaw:
image: ghcr.io/zeroclaw-labs/zeroclaw:latest
# For ARM64 environments where the distroless image exits immediately,
# switch to the Debian compatibility image instead:
# image: ghcr.io/zeroclaw-labs/zeroclaw:debian
# Or build locally (distroless, no shell):
# build: .
# Or build the Debian variant (includes bash, git, curl):
@@ -50,15 +53,15 @@ services:
resources:
limits:
cpus: '2'
memory: 2G
memory: 512M
reservations:
cpus: '0.5'
memory: 512M
memory: 32M
# Health check — uses lightweight status instead of full diagnostics.
# For images with curl, prefer: curl -f http://localhost:42617/health
healthcheck:
test: ["CMD", "zeroclaw", "status"]
test: ["CMD", "zeroclaw", "status", "--format=exit-code"]
interval: 60s
timeout: 10s
retries: 3
(Two binary image assets added: 851 KiB and 2.1 MiB.)
+16 -6
@@ -37,6 +37,12 @@ Merge-blocking checks should stay small and deterministic. Optional checks are u
- `.github/workflows/pub-homebrew-core.yml` (`Pub Homebrew Core`)
- Purpose: manual, bot-owned Homebrew core formula bump PR flow for tagged releases
- Guardrail: release tag must match `Cargo.toml` version
- `.github/workflows/pub-scoop.yml` (`Pub Scoop Manifest`)
- Purpose: Scoop bucket manifest update for Windows; auto-called by stable release, also manual dispatch
- Guardrail: release tag must be `vX.Y.Z` format; Windows binary hash extracted from `SHA256SUMS`
- `.github/workflows/pub-aur.yml` (`Pub AUR Package`)
- Purpose: AUR PKGBUILD push for Arch Linux; auto-called by stable release, also manual dispatch
- Guardrail: release tag must be `vX.Y.Z` format; source tarball SHA256 computed at publish time
- `.github/workflows/pr-label-policy-check.yml` (`Label Policy Sanity`)
- Purpose: validate shared contributor-tier policy in `.github/label-policy.json` and ensure label workflows consume that policy
- `.github/workflows/test-rust-build.yml` (`Rust Reusable Job`)
@@ -75,6 +81,8 @@ Merge-blocking checks should stay small and deterministic. Optional checks are u
- `Docker`: tag push (`v*`) for publish, matching PRs to `master` for smoke build, manual dispatch for smoke only
- `Release`: tag push (`v*`), weekly schedule (verification-only), manual dispatch (verification or publish)
- `Pub Homebrew Core`: manual dispatch only
- `Pub Scoop Manifest`: auto-called by stable release, also manual dispatch
- `Pub AUR Package`: auto-called by stable release, also manual dispatch
- `Security Audit`: push to `master`, PRs to `master`, weekly schedule
- `Sec Vorpal Reviewdog`: manual dispatch only
- `Workflow Sanity`: PR/push when `.github/workflows/**`, `.github/*.yml`, or `.github/*.yaml` change
@@ -92,12 +100,14 @@ Merge-blocking checks should stay small and deterministic. Optional checks are u
2. Docker failures on PRs: inspect `.github/workflows/pub-docker-img.yml` `pr-smoke` job.
3. Release failures (tag/manual/scheduled): inspect `.github/workflows/pub-release.yml` and the `prepare` job outputs.
4. Homebrew formula publish failures: inspect `.github/workflows/pub-homebrew-core.yml` summary output and bot token/fork variables.
5. Security failures: inspect `.github/workflows/sec-audit.yml` and `deny.toml`.
6. Workflow syntax/lint failures: inspect `.github/workflows/workflow-sanity.yml`.
7. PR intake failures: inspect `.github/workflows/pr-intake-checks.yml` sticky comment and run logs.
8. Label policy parity failures: inspect `.github/workflows/pr-label-policy-check.yml`.
9. Docs failures in CI: inspect `docs-quality` job logs in `.github/workflows/ci-run.yml`.
10. Strict delta lint failures in CI: inspect `lint-strict-delta` job logs and compare with `BASE_SHA` diff scope.
5. Scoop manifest publish failures: inspect `.github/workflows/pub-scoop.yml` summary output and `SCOOP_BUCKET_REPO`/`SCOOP_BUCKET_TOKEN` settings.
6. AUR package publish failures: inspect `.github/workflows/pub-aur.yml` summary output and `AUR_SSH_KEY` secret.
7. Security failures: inspect `.github/workflows/sec-audit.yml` and `deny.toml`.
8. Workflow syntax/lint failures: inspect `.github/workflows/workflow-sanity.yml`.
9. PR intake failures: inspect `.github/workflows/pr-intake-checks.yml` sticky comment and run logs.
10. Label policy parity failures: inspect `.github/workflows/pr-label-policy-check.yml`.
11. Docs failures in CI: inspect `docs-quality` job logs in `.github/workflows/ci-run.yml`.
12. Strict delta lint failures in CI: inspect `lint-strict-delta` job logs and compare with `BASE_SHA` diff scope.
## Maintenance Rules
+37
@@ -23,6 +23,8 @@ Release automation lives in:
- `.github/workflows/pub-release.yml`
- `.github/workflows/pub-homebrew-core.yml` (manual Homebrew formula PR, bot-owned)
- `.github/workflows/pub-scoop.yml` (manual Scoop bucket manifest update)
- `.github/workflows/pub-aur.yml` (manual AUR PKGBUILD push)
Modes:
@@ -115,6 +117,41 @@ Workflow guardrails:
- formula license is normalized to `Apache-2.0 OR MIT`
- PR is opened from the bot fork into `Homebrew/homebrew-core:master`
### 7) Publish Scoop manifest (Windows)
Run `Pub Scoop Manifest` manually:
- `release_tag`: `vX.Y.Z`
- `dry_run`: `true` first, then `false`
Required repository settings for non-dry-run:
- secret: `SCOOP_BUCKET_TOKEN` (PAT with push access to the bucket repo)
- variable: `SCOOP_BUCKET_REPO` (for example `zeroclaw-labs/scoop-zeroclaw`)
Workflow guardrails:
- release tag must be `vX.Y.Z` format
- Windows binary SHA256 extracted from `SHA256SUMS` release asset
- manifest pushed to `bucket/zeroclaw.json` in the Scoop bucket repo
### 8) Publish AUR package (Arch Linux)
Run `Pub AUR Package` manually:
- `release_tag`: `vX.Y.Z`
- `dry_run`: `true` first, then `false`
Required repository settings for non-dry-run:
- secret: `AUR_SSH_KEY` (SSH private key registered with AUR)
Workflow guardrails:
- release tag must be `vX.Y.Z` format
- source tarball SHA256 computed from the tagged release
- PKGBUILD and .SRCINFO pushed to AUR `zeroclaw` package
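Both publish workflows enforce the same tag-format guardrail. The `vX.Y.Z` check can be sketched as follows (illustrative only; the actual workflows validate the tag in shell, and this helper name is an assumption):

```rust
// Accept only strict stable-release tags of the form vX.Y.Z.
fn is_stable_tag(tag: &str) -> bool {
    let Some(version) = tag.strip_prefix('v') else {
        return false;
    };
    let parts: Vec<&str> = version.split('.').collect();
    parts.len() == 3
        && parts
            .iter()
            .all(|p| !p.is_empty() && p.chars().all(|c| c.is_ascii_digit()))
}

fn main() {
    assert!(is_stable_tag("v0.4.3"));
    assert!(!is_stable_tag("0.4.3")); // missing `v` prefix
    assert!(!is_stable_tag("v0.4")); // not three components
    assert!(!is_stable_tag("v0.4.3-rc1")); // pre-release suffixes rejected
    println!("tag guardrail ok");
}
```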
## Emergency / Recovery Path
If tag-push release fails after artifacts are validated:
+1 -1
@@ -31,7 +31,7 @@ Build with `--features hardware` to include Uno Q support.
### 1.1 Configure Uno Q via App Lab
1. Download [Arduino App Lab](https://docs.arduino.cc/software/app-lab/) (AppImage on Linux).
1. Download [Arduino App Lab](https://docs.arduino.cc/software/app-lab/) (tar.gz on Linux).
2. Connect Uno Q via USB, power it on.
3. Open App Lab, connect to the board.
4. Follow the setup wizard:
@@ -76,7 +76,7 @@ runtime_trace_max_entries = 200
| Key | Default | Purpose |
|---|---|---|
| `compact_context` | `false` | When true: bootstrap_max_chars=6000, rag_chunk_limit=2. Use for models of 13B or smaller |
| `compact_context` | `true` | When true: bootstrap_max_chars=6000, rag_chunk_limit=2. Use for models of 13B or smaller |
| `max_tool_iterations` | `10` | Maximum tool-call loop turns per user message across CLI, gateway, and channels |
| `max_history_messages` | `50` | Maximum conversation history messages retained per session |
| `parallel_tools` | `false` | Enable parallel tool execution within a single iteration |
+73
@@ -0,0 +1,73 @@
# OpenAI Temperature Compatibility Reference
This document provides empirical evidence for temperature parameter compatibility across OpenAI models.
## Summary
Different OpenAI model families have different temperature requirements:
- **Reasoning models** (o-series, gpt-5 base variants): Only accept `temperature=1.0`
- **Search models**: Do not accept temperature parameter (must be omitted)
- **Standard models** (gpt-3.5, gpt-4, gpt-4o): Accept flexible temperature values (0.0-2.0)
## Tested Models
### Models Requiring temperature=1.0
| Model | Accepts 0.7 | Accepts 1.0 | Recommendation |
|-------|-------------|-------------|----------------|
| o1 | ❌ | ✅ | USE_1.0 |
| o1-2024-12-17 | ❌ | ✅ | USE_1.0 |
| o3 | ❌ | ✅ | USE_1.0 |
| o3-2025-04-16 | ❌ | ✅ | USE_1.0 |
| o3-mini | ❌ | ✅ | USE_1.0 |
| o3-mini-2025-01-31 | ❌ | ✅ | USE_1.0 |
| o4-mini | ❌ | ✅ | USE_1.0 |
| o4-mini-2025-04-16 | ❌ | ✅ | USE_1.0 |
| gpt-5 | ❌ | ✅ | USE_1.0 |
| gpt-5-2025-08-07 | ❌ | ✅ | USE_1.0 |
| gpt-5-mini | ❌ | ✅ | USE_1.0 |
| gpt-5-mini-2025-08-07 | ❌ | ✅ | USE_1.0 |
| gpt-5-nano | ❌ | ✅ | USE_1.0 |
| gpt-5-nano-2025-08-07 | ❌ | ✅ | USE_1.0 |
| gpt-5.1-chat-latest | ❌ | ✅ | USE_1.0 |
| gpt-5.2-chat-latest | ❌ | ✅ | USE_1.0 |
| gpt-5.3-chat-latest | ❌ | ✅ | USE_1.0 |
### Models Accepting Flexible Temperature (0.7 works)
All standard GPT models accept flexible temperature values:
- gpt-3.5-turbo (all variants)
- gpt-4 (all variants)
- gpt-4-turbo (all variants)
- gpt-4o (all variants)
- gpt-4o-mini (all variants)
- gpt-4.1 (all variants)
- gpt-5-chat-latest
- gpt-5.2, gpt-5.2-2025-12-11
- gpt-5.4, gpt-5.4-2026-03-05
### Models Requiring Temperature Omission
Search-preview models do not accept temperature parameter:
- gpt-4o-mini-search-preview
- gpt-4o-search-preview
- gpt-5-search-api
## Implementation
The `adjust_temperature_for_model()` function in `src/providers/openai.rs` automatically adjusts temperature to 1.0 for reasoning models while preserving user-specified values for standard models.
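The dispatch can be sketched as a standalone function. This is an illustrative simplification only, not the actual `adjust_temperature_for_model()` source: the prefix heuristics below are assumptions, and edge cases such as `gpt-5.1/5.2/5.3-chat-latest` need the full model list from the tables above.

```rust
/// Illustrative temperature policy: `None` means "omit the parameter".
fn effective_temperature(model: &str, requested: f64) -> Option<f64> {
    // Search models reject the temperature parameter outright.
    if model.contains("search") {
        return None;
    }
    // o-series reasoning models only accept 1.0.
    let o_series = ["o1", "o3", "o4-mini"]
        .iter()
        .any(|p| model == *p || model.starts_with(&format!("{p}-")));
    // Simplification: treat the gpt-5 base family (except gpt-5-chat-*)
    // as reasoning models as well.
    let gpt5_reasoning = (model == "gpt-5" || model.starts_with("gpt-5-"))
        && !model.starts_with("gpt-5-chat");
    if o_series || gpt5_reasoning {
        return Some(1.0);
    }
    // Standard models (gpt-3.5/gpt-4/gpt-4o/...) honor the caller's value.
    Some(requested)
}

fn main() {
    assert_eq!(effective_temperature("o3-mini", 0.7), Some(1.0));
    assert_eq!(effective_temperature("gpt-5-nano", 0.7), Some(1.0));
    assert_eq!(effective_temperature("gpt-4o", 0.7), Some(0.7));
    assert_eq!(effective_temperature("gpt-4o-search-preview", 0.7), None);
    println!("temperature policy ok");
}
```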
## Testing Methodology
Models were tested with:
1. No temperature parameter (baseline)
2. temperature=0.7 (common default)
3. temperature=1.0 (reasoning model requirement)
Results were validated against actual OpenAI API responses.
## References
- OpenAI API Documentation: https://platform.openai.com/docs/api-reference/chat
- Related Issue: Temperature errors with o1/o3/gpt-5 models
+58
@@ -22,6 +22,64 @@ For first-time installation, start from [one-click-bootstrap.md](../setup-guides
| Foreground runtime | `zeroclaw daemon` | local debugging, short-lived sessions |
| Foreground gateway only | `zeroclaw gateway` | webhook endpoint testing |
| User service | `zeroclaw service install && zeroclaw service start` | persistent operator-managed runtime |
| Docker / Podman | `docker compose up -d` | containerized deployment |
## Docker / Podman Runtime
If you installed via `./install.sh --docker`, the container exits after onboarding. To run
ZeroClaw as a long-lived container, use the repository `docker-compose.yml` or start a
container manually against the persisted data directory.
### Recommended: docker-compose
```bash
# Start (detached, auto-restarts on reboot)
docker compose up -d
# Stop
docker compose down
# Restart
docker compose up -d
```
Replace `docker` with `podman` if using Podman.
### Manual container lifecycle
```bash
# Start a new container from the bootstrap image
docker run -d --name zeroclaw \
--restart unless-stopped \
-v "$PWD/.zeroclaw-docker/.zeroclaw:/zeroclaw-data/.zeroclaw" \
-v "$PWD/.zeroclaw-docker/workspace:/zeroclaw-data/workspace" \
-e HOME=/zeroclaw-data \
-e ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace \
-p 42617:42617 \
zeroclaw-bootstrap:local \
gateway
# Stop (preserves config and workspace)
docker stop zeroclaw
# Restart a stopped container
docker start zeroclaw
# View logs
docker logs -f zeroclaw
# Health check
docker exec zeroclaw zeroclaw status
```
For Podman, add `--userns keep-id --user "$(id -u):$(id -g)"` and append `:Z` to volume mounts.
### Key detail: do not re-run install.sh to restart
Re-running `install.sh --docker` rebuilds the image and re-runs onboarding. To simply
restart, use `docker start`, `docker compose up -d`, or `podman start`.
For full setup instructions, see [one-click-bootstrap.md](../setup-guides/one-click-bootstrap.md#stopping-and-restarting-a-dockerpodman-container).
## Baseline Operator Checklist
+5 -1
@@ -76,7 +76,7 @@ Operational note for container users:
| Key | Default | Purpose |
|---|---|---|
| `compact_context` | `false` | When true: bootstrap_max_chars=6000, rag_chunk_limit=2. Use for 13B or smaller models |
| `compact_context` | `true` | When true: bootstrap_max_chars=6000, rag_chunk_limit=2. Use for 13B or smaller models |
| `max_tool_iterations` | `10` | Maximum tool-call loop turns per user message across CLI, gateway, and channels |
| `max_history_messages` | `50` | Maximum conversation history messages retained per session |
| `parallel_tools` | `false` | Enable parallel tool execution within a single iteration |
@@ -183,6 +183,8 @@ Delegate sub-agent configurations. Each key under `[agents]` defines a named sub
| `agentic` | `false` | Enable multi-turn tool-call loop mode for the sub-agent |
| `allowed_tools` | `[]` | Tool allowlist for agentic mode |
| `max_iterations` | `10` | Max tool-call iterations for agentic mode |
| `timeout_secs` | `120` | Timeout in seconds for non-agentic provider calls (1–3600) |
| `agentic_timeout_secs` | `300` | Timeout in seconds for agentic sub-agent loops (1–3600) |
Notes:
@@ -199,11 +201,13 @@ max_depth = 2
agentic = true
allowed_tools = ["web_search", "http_request", "file_read"]
max_iterations = 8
agentic_timeout_secs = 600
[agents.coder]
provider = "ollama"
model = "qwen2.5-coder:32b"
temperature = 0.2
timeout_secs = 60
```
## `[runtime]`
+97
View File
@@ -98,6 +98,103 @@ If you add `--skip-build`, the installer skips local image build. It first tries
Docker tag (`ZEROCLAW_DOCKER_IMAGE`, default: `zeroclaw-bootstrap:local`); if missing,
it pulls `ghcr.io/zeroclaw-labs/zeroclaw:latest` and tags it locally before running.
### Stopping and restarting a Docker/Podman container
After `./install.sh --docker` finishes, the container exits. Your config and workspace
are persisted in the data directory (default: `./.zeroclaw-docker`, or `~/.zeroclaw-docker`
when bootstrapping via `curl | bash`). You can override this path with `ZEROCLAW_DOCKER_DATA_DIR`.
**Do not re-run `install.sh`** to restart -- it will rebuild the image and re-run onboarding.
Instead, start a new container from the existing image and mount the persisted data directory.
#### Using the repository docker-compose.yml
The simplest way to run ZeroClaw long-term in Docker/Podman is with the provided
`docker-compose.yml` at the repository root. It uses a named volume (`zeroclaw-data`)
and sets `restart: unless-stopped` so the container survives reboots.
```bash
# Start (detached)
docker compose up -d
# Stop
docker compose down
# Restart after stopping
docker compose up -d
```
Replace `docker` with `podman` if you use Podman.
#### Manual container run (using install.sh data directory)
If you installed via `./install.sh --docker` and want to reuse the `.zeroclaw-docker`
data directory without compose:
```bash
# Docker
docker run -d --name zeroclaw \
--restart unless-stopped \
-v "$PWD/.zeroclaw-docker/.zeroclaw:/zeroclaw-data/.zeroclaw" \
-v "$PWD/.zeroclaw-docker/workspace:/zeroclaw-data/workspace" \
-e HOME=/zeroclaw-data \
-e ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace \
-p 42617:42617 \
zeroclaw-bootstrap:local \
gateway
# Podman (add --userns keep-id and :Z volume labels)
podman run -d --name zeroclaw \
--restart unless-stopped \
--userns keep-id \
--user "$(id -u):$(id -g)" \
-v "$PWD/.zeroclaw-docker/.zeroclaw:/zeroclaw-data/.zeroclaw:Z" \
-v "$PWD/.zeroclaw-docker/workspace:/zeroclaw-data/workspace:Z" \
-e HOME=/zeroclaw-data \
-e ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace \
-p 42617:42617 \
zeroclaw-bootstrap:local \
gateway
```
#### Common lifecycle commands
```bash
# Stop the container (preserves data)
docker stop zeroclaw
# Start a stopped container (config and workspace are intact)
docker start zeroclaw
# View logs
docker logs -f zeroclaw
# Remove the container (data in volumes/.zeroclaw-docker is preserved)
docker rm zeroclaw
# Check health
docker exec zeroclaw zeroclaw status
```
#### Environment variables
When running manually, pass provider configuration as environment variables
or ensure they are already saved in the persisted `config.toml`:
```bash
docker run -d --name zeroclaw \
-e API_KEY="sk-..." \
-e PROVIDER="openrouter" \
-v "$PWD/.zeroclaw-docker/.zeroclaw:/zeroclaw-data/.zeroclaw" \
-v "$PWD/.zeroclaw-docker/workspace:/zeroclaw-data/workspace" \
-p 42617:42617 \
zeroclaw-bootstrap:local \
gateway
```
If you already ran `onboard` during the initial install, your API key and provider are
saved in `.zeroclaw-docker/.zeroclaw/config.toml` and do not need to be passed again.
### Quick onboarding (non-interactive)
```bash
@@ -0,0 +1,314 @@
# LinkedIn Tool — Design Spec
**Date:** 2026-03-13
**Status:** Approved
**Risk tier:** Medium (new tool, external API, credential handling)
## Summary
Native LinkedIn integration tool for ZeroClaw. Enables the agent to create posts,
list its own posts, comment, react, delete posts, view post engagement, and retrieve
profile info — all through LinkedIn's official REST API with OAuth2 authentication.
## Motivation
Enable ZeroClaw to autonomously publish LinkedIn content on a schedule (via cron),
drawing from the user's memory, project history, and Medium feed. Removes dependency
on third-party platforms like Composio for social media posting.
## Required OAuth2 scopes
Users must grant these scopes when creating their LinkedIn Developer App:
| Scope | Required for |
|---|---|
| `w_member_social` | `create_post`, `comment`, `react`, `delete_post` |
| `r_liteprofile` | `get_profile` |
| `r_member_social` | `list_posts`, `get_engagement` |
The "Share on LinkedIn" and "Sign In with LinkedIn using OpenID Connect" products
must be requested in the LinkedIn Developer App dashboard (both auto-approve).
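As a hedged illustration of the client side of granting these scopes, the authorization URL could be assembled as below. The `/oauth/v2/authorization` path follows LinkedIn's standard OAuth2 authorization-code flow (the spec itself only names the `/oauth/v2/accessToken` endpoint); `client_id` and `redirect_uri` are placeholders, and the exact parameter encoding should be verified against LinkedIn's current docs:

```rust
/// Builds a LinkedIn OAuth2 authorization URL requesting the three scopes
/// from the table above. Sketch only: endpoint path and parameter names
/// follow LinkedIn's documented OAuth2 flow, not this codebase.
fn authorize_url(client_id: &str, redirect_uri: &str) -> String {
    // Scopes are space-separated; %20 is the percent-encoded separator.
    let scope = "w_member_social%20r_liteprofile%20r_member_social";
    format!(
        "https://www.linkedin.com/oauth/v2/authorization?response_type=code&client_id={client_id}&redirect_uri={redirect_uri}&scope={scope}"
    )
}
```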
## Architecture
### File structure
| File | Role |
|---|---|
| `src/tools/linkedin.rs` | `Tool` trait impl, action dispatch, parameter validation |
| `src/tools/linkedin_client.rs` | OAuth2 token management, LinkedIn REST API wrappers |
| `src/tools/mod.rs` | Module declaration, pub use, registration in `all_tools_with_runtime` |
| `src/config/schema.rs` | `[linkedin]` config section (`LinkedInConfig`) |
| `src/config/mod.rs` | Add `LinkedInConfig` to pub use exports |
### No new dependencies
All required crates are already in `Cargo.toml`: `reqwest` (HTTP), `serde`/`serde_json`
(serialization), `chrono` (timestamps), `tokio` (async fs for .env reading).
## Config
### `config.toml`
```toml
[linkedin]
enabled = false
```
### `.env` credentials
```bash
LINKEDIN_CLIENT_ID=your_client_id
LINKEDIN_CLIENT_SECRET=your_client_secret
LINKEDIN_ACCESS_TOKEN=your_access_token
LINKEDIN_REFRESH_TOKEN=your_refresh_token
LINKEDIN_PERSON_ID=your_person_urn_id
```
ID format: `LINKEDIN_PERSON_ID` is the bare ID (e.g., `dXNlcjpA...`), not the
full URN. The client prefixes `urn:li:person:` internally.
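A minimal sketch of that internal prefixing (the helper name is hypothetical; the spec only states that the client adds the prefix):

```rust
/// Expands the bare person ID from LINKEDIN_PERSON_ID into a full actor URN.
/// Hypothetical helper; the spec only says the prefix is added internally.
fn person_urn(person_id: &str) -> String {
    format!("urn:li:person:{person_id}")
}
```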
## Tool design
### Single tool, action-dispatched
Tool name: `linkedin`
The LLM calls it with an `action` field and action-specific parameters:
```json
{ "action": "create_post", "text": "...", "visibility": "PUBLIC" }
```
### Actions
| Action | Params | API | Write? |
|---|---|---|---|
| `create_post` | `text`, `visibility?` (PUBLIC/CONNECTIONS, default PUBLIC), `article_url?`, `article_title?` | `POST /rest/posts` | Yes |
| `list_posts` | `count?` (default 10, max 50) | `GET /rest/posts?author={personUrn}&q=author` | No |
| `comment` | `post_id`, `text` | `POST /rest/socialActions/{id}/comments` | Yes |
| `react` | `post_id`, `reaction_type` (LIKE/CELEBRATE/SUPPORT/LOVE/INSIGHTFUL/FUNNY) | `POST /rest/reactions?actor={actorUrn}` | Yes |
| `delete_post` | `post_id` | `DELETE /rest/posts/{id}` | Yes |
| `get_engagement` | `post_id` | `GET /rest/socialActions/{id}` | No |
| `get_profile` | (none) | `GET /rest/me` | No |
Note: `list_posts` queries posts authored by the authenticated user (not a home feed —
LinkedIn does not expose a home feed API). `get_engagement` returns likes/comments/shares
counts for a specific post via the socialActions endpoint.
### Security enforcement
- Write actions (`create_post`, `comment`, `react`, `delete_post`): check `security.can_act()` + `security.record_action()`
- Read actions (`list_posts`, `get_engagement`, `get_profile`): still call `record_action()` for rate tracking
### Parameter validation
- `article_title` without `article_url` returns error: "article_title requires article_url"
- `react` requires both `post_id` and `reaction_type`
- `comment` requires both `post_id` and `text`
- `create_post` requires `text` (non-empty)
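Sketched in Rust under the assumption of a plain parameter struct (the struct, field, and function names are illustrative — the real tool parses JSON arguments), the rules above could look like:

```rust
/// Illustrative parameter holder; the real tool receives JSON arguments.
#[derive(Default)]
struct Params {
    text: Option<String>,
    post_id: Option<String>,
    reaction_type: Option<String>,
    article_url: Option<String>,
    article_title: Option<String>,
}

/// Applies the validation rules listed above, returning the first violation.
fn validate(action: &str, p: &Params) -> Result<(), String> {
    if p.article_title.is_some() && p.article_url.is_none() {
        return Err("article_title requires article_url".to_string());
    }
    match action {
        "create_post" if p.text.as_deref().map_or(true, |t| t.trim().is_empty()) => {
            Err("create_post requires non-empty text".to_string())
        }
        "comment" if p.post_id.is_none() || p.text.is_none() => {
            Err("comment requires both post_id and text".to_string())
        }
        "react" if p.post_id.is_none() || p.reaction_type.is_none() => {
            Err("react requires both post_id and reaction_type".to_string())
        }
        _ => Ok(()),
    }
}
```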
### Parameter schema
```json
{
"type": "object",
"properties": {
"action": {
"type": "string",
"enum": ["create_post", "list_posts", "comment", "react", "delete_post", "get_engagement", "get_profile"],
"description": "The LinkedIn action to perform"
},
"text": {
"type": "string",
"description": "Post or comment text content"
},
"visibility": {
"type": "string",
"enum": ["PUBLIC", "CONNECTIONS"],
"description": "Post visibility (default: PUBLIC)"
},
"article_url": {
"type": "string",
"description": "URL to attach as article/link preview"
},
"article_title": {
"type": "string",
"description": "Title for the attached article (requires article_url)"
},
"post_id": {
"type": "string",
"description": "LinkedIn post URN for comment/react/delete/engagement"
},
"reaction_type": {
"type": "string",
"enum": ["LIKE", "CELEBRATE", "SUPPORT", "LOVE", "INSIGHTFUL", "FUNNY"],
"description": "Reaction type for the react action"
},
"count": {
"type": "integer",
"description": "Number of posts to retrieve (default 10, max 50)"
}
},
"required": ["action"]
}
```
## LinkedIn client
### `LinkedInClient` struct
```rust
pub struct LinkedInClient {
workspace_dir: PathBuf,
}
```
Uses `crate::config::build_runtime_proxy_client_with_timeouts("tool.linkedin", 30, 10)`
per request (same pattern as Pushover), respecting runtime proxy configuration.
### Credential loading
Same pattern as `PushoverTool`: reads `.env` from `workspace_dir`, parses key-value
pairs, supports `export` prefix and quoted values.
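A sketch of that parsing pattern (the function name is hypothetical; the actual loader follows the Pushover implementation):

```rust
/// Parses a single .env line into (key, value), tolerating an optional
/// `export ` prefix, surrounding quotes, comments, and blank lines.
fn parse_env_line(line: &str) -> Option<(String, String)> {
    let line = line.trim();
    // Skip blanks and comment lines entirely.
    if line.is_empty() || line.starts_with('#') {
        return None;
    }
    let line = line.strip_prefix("export ").unwrap_or(line);
    let (key, value) = line.split_once('=')?;
    // Strip one layer of single or double quotes around the value.
    let value = value.trim().trim_matches('"').trim_matches('\'');
    Some((key.trim().to_string(), value.to_string()))
}
```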
### Token refresh
1. All API calls use `LINKEDIN_ACCESS_TOKEN` in `Authorization: Bearer` header
2. On 401 response, attempt token refresh:
- `POST https://www.linkedin.com/oauth/v2/accessToken`
- Body: `grant_type=refresh_token&refresh_token=...&client_id=...&client_secret=...`
3. On successful refresh, update `LINKEDIN_ACCESS_TOKEN` in `.env` file via
line-targeted replacement (read all lines, replace the matching key line, write back).
Preserves `export` prefixes, quoting style, comments, and all other keys.
4. Retry the original request once
5. If refresh also fails, return error with clear message about re-authentication
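Step 3's line-targeted replacement could be sketched as follows (helper name hypothetical; this simplified version keeps the `export ` prefix and all other lines intact but writes the new value unquoted, whereas the real implementation also preserves the target line's quoting style):

```rust
/// Rewrites only the line holding `key` in the .env contents, keeping the
/// `export ` prefix on that line and leaving comments and other keys intact.
fn replace_env_key(contents: &str, key: &str, new_value: &str) -> String {
    let needle = format!("{key}=");
    contents
        .lines()
        .map(|line| {
            let trimmed = line.trim_start();
            let body = trimmed.strip_prefix("export ").unwrap_or(trimmed);
            if body.starts_with(&needle) {
                let prefix = if trimmed.starts_with("export ") { "export " } else { "" };
                format!("{prefix}{key}={new_value}")
            } else {
                line.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}
```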
### API versioning
All requests include:
- `LinkedIn-Version: 202402` header (stable version)
- `X-Restli-Protocol-Version: 2.0.0` header
- `Content-Type: application/json`
### React endpoint details
The `react` action sends:
- `POST /rest/reactions?actor=urn:li:person:{personId}`
- Body: `{"reactionType": "LIKE", "object": "urn:li:ugcPost:{postId}"}`
The actor URN is derived from `LINKEDIN_PERSON_ID` in `.env`.
### Response parsing
The client returns structured data types:
```rust
pub struct PostSummary {
pub id: String,
pub text: String,
pub created_at: String,
pub visibility: String,
}
pub struct ProfileInfo {
pub id: String,
pub name: String,
pub headline: String,
}
pub struct EngagementSummary {
pub likes: u64,
pub comments: u64,
pub shares: u64,
}
```
## Registration
In `src/tools/mod.rs` (follows `security_ops` config-gated pattern):
```rust
// Module declarations
pub mod linkedin;
pub mod linkedin_client;
// Re-exports
pub use linkedin::LinkedInTool;
// In all_tools_with_runtime():
if root_config.linkedin.enabled {
tool_arcs.push(Arc::new(LinkedInTool::new(
security.clone(),
workspace_dir.to_path_buf(),
)));
}
```
## Config schema
In `src/config/schema.rs`:
```rust
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInConfig {
pub enabled: bool,
}
impl Default for LinkedInConfig {
fn default() -> Self {
Self { enabled: false }
}
}
```
Added as field `pub linkedin: LinkedInConfig` on the `Config` struct.
Added to `pub use` exports in `src/config/mod.rs`.
## Testing
### Unit tests (in `linkedin.rs`)
- Tool name, description, schema validation
- Action dispatch routes correctly
- Write actions blocked in read-only mode
- Write actions blocked by rate limiting
- Missing required params return clear errors
- Unknown action returns error
- `article_title` without `article_url` returns validation error
### Unit tests (in `linkedin_client.rs`)
- Credential parsing from `.env` (plain, quoted, export prefix, comments)
- Missing credential fields produce specific errors
- Token refresh writes updated token back to `.env` preserving other keys
- Post creation builds correct request body with URN formatting
- React builds correct query param with actor URN
- Visibility defaults to PUBLIC when omitted
### Registry tests (in `mod.rs`)
- `all_tools` excludes `linkedin` when `linkedin.enabled = false`
- `all_tools` includes `linkedin` when `linkedin.enabled = true`
### Integration tests
Not added in this PR — would require live LinkedIn API credentials.
A `#[cfg(feature = "test-linkedin-live")]` gate can be added later.
## Error handling
- Missing `.env` file: "LinkedIn credentials not found. Add LINKEDIN_* keys to .env"
- Missing specific key: "LINKEDIN_ACCESS_TOKEN not found in .env"
- Expired token + no refresh token: "LinkedIn token expired. Re-authenticate or add LINKEDIN_REFRESH_TOKEN to .env"
- `article_title` without `article_url`: "article_title requires article_url to be set"
- API errors: pass through LinkedIn's error message with status code
- Rate limited by LinkedIn: "LinkedIn API rate limit exceeded. Try again later."
- Missing scope: "LinkedIn API returned 403. Ensure your app has the required scopes: w_member_social, r_liteprofile, r_member_social"
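Assuming a simple status-to-message mapping (illustrative only; the function name is hypothetical and actual wording may differ), the API-facing rules above could be centralized as:

```rust
/// Maps a LinkedIn API status code to the user-facing messages listed above;
/// unknown statuses pass LinkedIn's own message through with the code.
fn api_error_message(status: u16, api_msg: &str) -> String {
    match status {
        401 => "LinkedIn token expired. Re-authenticate or add LINKEDIN_REFRESH_TOKEN to .env".to_string(),
        403 => "LinkedIn API returned 403. Ensure your app has the required scopes: w_member_social, r_liteprofile, r_member_social".to_string(),
        429 => "LinkedIn API rate limit exceeded. Try again later.".to_string(),
        _ => format!("LinkedIn API error {status}: {api_msg}"),
    }
}
```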
## PR metadata
- **Branch:** `feature/linkedin-tool`
- **Title:** `feat(tools): add native LinkedIn integration tool`
- **Risk:** Medium — new tool, external API, no security boundary changes
- **Size target:** M (2 new files ~200-300 lines each, 3-4 modified files)
+1 -1
View File
@@ -65,7 +65,7 @@ Note for container users:
| Key | Default | Purpose |
|---|---|---|
| `compact_context` | `false` | When enabled: bootstrap_max_chars=6000, rag_chunk_limit=2. Use for 13B or smaller models |
| `compact_context` | `true` | When enabled: bootstrap_max_chars=6000, rag_chunk_limit=2. Use for 13B or smaller models |
| `max_tool_iterations` | `10` | Maximum tool-call loop turns per message across CLI, gateway, and channels |
| `max_history_messages` | `50` | Maximum history messages retained per session |
| `parallel_tools` | `false` | Enable parallel tool execution within a single turn |
+12
View File
@@ -0,0 +1,12 @@
[package]
name = "zeroclaw-weather-plugin"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
extism-pdk = "1.3"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
+8
View File
@@ -0,0 +1,8 @@
name = "weather"
version = "0.1.0"
description = "Example weather tool plugin for ZeroClaw"
author = "ZeroClaw Labs"
wasm_path = "target/wasm32-wasip1/release/zeroclaw_weather_plugin.wasm"
capabilities = ["tool"]
permissions = ["http_client"]
+42
View File
@@ -0,0 +1,42 @@
//! Example ZeroClaw weather plugin.
//!
//! Demonstrates how to create a WASM tool plugin using extism-pdk.
//! Build with: cargo build --target wasm32-wasip1 --release
use extism_pdk::*;
use serde::{Deserialize, Serialize};
#[derive(Deserialize)]
struct WeatherInput {
location: String,
}
#[derive(Serialize)]
struct WeatherOutput {
location: String,
temperature: f64,
unit: String,
condition: String,
humidity: u32,
}
/// Get weather for a location (mock implementation for demonstration).
#[plugin_fn]
pub fn get_weather(input: String) -> FnResult<String> {
let params: WeatherInput =
serde_json::from_str(&input).map_err(|e| Error::msg(format!("invalid input: {e}")))?;
// Mock weather data for demonstration
let output = WeatherOutput {
location: params.location,
temperature: 22.5,
unit: "celsius".to_string(),
condition: "Partly cloudy".to_string(),
humidity: 65,
};
let json = serde_json::to_string(&output)
.map_err(|e| Error::msg(format!("serialization error: {e}")))?;
Ok(json)
}
+39
View File
@@ -1 +1,40 @@
# Example Config
# ── Delegate Tool Configuration ─────────────────────────────────
# Global default timeouts for the delegate tool.
# These can be overridden per-agent in [agents.<name>] sections.
[delegate]
# Timeout in seconds for non-agentic sub-agent provider calls.
# Default: 120
timeout_secs = 120
# Timeout in seconds for agentic sub-agent runs (multi-turn tool loops).
# Default: 300
agentic_timeout_secs = 300
# ── Delegate Agent Configuration ────────────────────────────────
# Define individual sub-agents that can be invoked via the delegate tool.
# Each agent can override the global timeout values.
[agents.researcher]
provider = "openrouter"
model = "anthropic/claude-sonnet-4"
system_prompt = "You are a research assistant."
temperature = 0.3
max_depth = 3
agentic = false
max_iterations = 10
# Optional: override global defaults
timeout_secs = 120
agentic_timeout_secs = 300
[agents.coder]
provider = "ollama"
model = "codellama"
system_prompt = "You are a coding assistant."
temperature = 0.2
max_depth = 2
agentic = true
allowed_tools = ["read", "edit", "exec"]
max_iterations = 15
# Optional: use longer timeout for complex coding tasks
agentic_timeout_secs = 600
+214 -32
View File
@@ -177,11 +177,29 @@ get_available_disk_mb() {
fi
}
is_musl_linux() {
[[ "$(uname -s)" == "Linux" ]] || return 1
if [[ -f /etc/alpine-release ]]; then
return 0
fi
if have_cmd ldd && ldd --version 2>&1 | grep -qi 'musl'; then
return 0
fi
return 1
}
detect_release_target() {
local os arch
os="$(uname -s)"
arch="$(uname -m)"
if is_musl_linux; then
return 1
fi
case "$os:$arch" in
Linux:x86_64)
echo "x86_64-unknown-linux-gnu"
@@ -283,6 +301,12 @@ install_prebuilt_binary() {
return 1
fi
if is_musl_linux; then
warn "Pre-built release binaries are not published for musl/Alpine yet."
warn "Falling back to source build."
return 1
fi
target="$(detect_release_target || true)"
if [[ -z "$target" ]]; then
warn "No pre-built binary target mapping for $(uname -s)/$(uname -m)."
@@ -424,46 +448,32 @@ bool_to_word() {
fi
}
guided_input_stream() {
# Some constrained containers report interactive stdin (-t 0) but deny
# opening /dev/stdin directly. Probe readability before selecting it.
if [[ -t 0 ]] && (: </dev/stdin) 2>/dev/null; then
echo "/dev/stdin"
guided_open_input() {
# Use stdin directly when it is an interactive terminal (e.g. SSH into LXC).
# Subshell probing of /dev/stdin fails in some constrained containers even
# when FD 0 is perfectly usable, so skip the probe and trust -t 0.
if [[ -t 0 ]]; then
GUIDED_FD=0
return 0
fi
if [[ -t 0 ]] && (: </proc/self/fd/0) 2>/dev/null; then
echo "/proc/self/fd/0"
return 0
fi
if (: </dev/tty) 2>/dev/null; then
echo "/dev/tty"
return 0
fi
return 1
# Non-interactive stdin: try to open /dev/tty as an explicit fd.
exec {GUIDED_FD}</dev/tty 2>/dev/null || return 1
}
guided_read() {
local __target_var="$1"
local __prompt="$2"
local __silent="${3:-false}"
local __input_source=""
local __value=""
if ! __input_source="$(guided_input_stream)"; then
return 1
fi
[[ -n "${GUIDED_FD:-}" ]] || guided_open_input || return 1
if [[ "$__silent" == true ]]; then
if ! read -r -s -p "$__prompt" __value <"$__input_source"; then
return 1
fi
read -r -s -u "$GUIDED_FD" -p "$__prompt" __value || return 1
echo
else
if ! read -r -p "$__prompt" __value <"$__input_source"; then
return 1
fi
read -r -u "$GUIDED_FD" -p "$__prompt" __value || return 1
fi
printf -v "$__target_var" '%s' "$__value"
@@ -517,7 +527,7 @@ install_system_deps() {
fi
elif have_cmd apt-get; then
run_privileged apt-get update -qq
run_privileged apt-get install -y build-essential pkg-config git curl
run_privileged apt-get install -y build-essential pkg-config git curl libssl-dev
elif have_cmd dnf; then
run_privileged dnf install -y \
gcc \
@@ -684,7 +694,7 @@ prompt_model() {
run_guided_installer() {
local os_name="$1"
if ! guided_input_stream >/dev/null; then
if ! guided_open_input >/dev/null; then
error "guided installer requires an interactive terminal."
error "Run from a terminal, or pass --no-guided with explicit flags."
exit 1
@@ -743,6 +753,140 @@ run_guided_installer() {
fi
}
ensure_default_config_and_workspace() {
# Creates a minimal config.toml and workspace scaffold files when the
# onboard wizard was skipped (e.g. --skip-build --prefer-prebuilt, or
# Docker mode without an API key).
#
# $1 — config directory (e.g. ~/.zeroclaw or $docker_data_dir/.zeroclaw)
# $2 — workspace directory (e.g. ~/.zeroclaw/workspace or $docker_data_dir/workspace)
# $3 — provider name (default: openrouter)
local config_dir="$1"
local workspace_dir="$2"
local provider="${3:-openrouter}"
mkdir -p "$config_dir" "$workspace_dir"
# --- config.toml ---
local config_path="$config_dir/config.toml"
if [[ ! -f "$config_path" ]]; then
step_dot "Creating default config.toml"
cat > "$config_path" <<TOML
# ZeroClaw configuration — generated by install.sh
# Edit this file or run 'zeroclaw onboard' to reconfigure.
default_provider = "${provider}"
workspace_dir = "${workspace_dir}"
TOML
if [[ -n "${API_KEY:-}" ]]; then
printf 'api_key = "%s"\n' "$API_KEY" >> "$config_path"
fi
if [[ -n "${MODEL:-}" ]]; then
printf 'default_model = "%s"\n' "$MODEL" >> "$config_path"
fi
chmod 600 "$config_path" 2>/dev/null || true
step_ok "Default config.toml created at $config_path"
else
step_dot "config.toml already exists, skipping"
fi
# --- Workspace scaffold ---
local subdirs=(sessions memory state cron skills)
for dir in "${subdirs[@]}"; do
mkdir -p "$workspace_dir/$dir"
done
# Seed workspace markdown files only if they don't already exist.
local user_name="${USER:-User}"
local agent_name="ZeroClaw"
_write_if_missing() {
local filepath="$1"
local content="$2"
if [[ ! -f "$filepath" ]]; then
printf '%s\n' "$content" > "$filepath"
fi
}
_write_if_missing "$workspace_dir/IDENTITY.md" \
"# IDENTITY.md — Who Am I?
- **Name:** ${agent_name}
- **Creature:** A Rust-forged AI — fast, lean, and relentless
- **Vibe:** Sharp, direct, resourceful. Not corporate. Not a chatbot.
---
Update this file as you evolve. Your identity is yours to shape."
_write_if_missing "$workspace_dir/USER.md" \
"# USER.md — Who You're Helping
## About You
- **Name:** ${user_name}
- **Timezone:** UTC
- **Languages:** English
## Preferences
- (Add your preferences here)
## Work Context
- (Add your work context here)
---
*Update this anytime. The more ${agent_name} knows, the better it helps.*"
_write_if_missing "$workspace_dir/MEMORY.md" \
"# MEMORY.md — Long-Term Memory
## Key Facts
(Add important facts here)
## Decisions & Preferences
(Record decisions and preferences here)
## Lessons Learned
(Document mistakes and insights here)
## Open Loops
(Track unfinished tasks and follow-ups here)"
_write_if_missing "$workspace_dir/AGENTS.md" \
"# AGENTS.md — ${agent_name} Personal Assistant
## Every Session (required)
Before doing anything else:
1. Read SOUL.md — this is who you are
2. Read USER.md — this is who you're helping
3. Use memory_recall for recent context
---
*Add your own conventions, style, and rules.*"
_write_if_missing "$workspace_dir/SOUL.md" \
"# SOUL.md — Who You Are
## Core Truths
**Be genuinely helpful, not performatively helpful.**
**Have opinions.** You're allowed to disagree.
**Be resourceful before asking.** Try to figure it out first.
**Earn trust through competence.**
## Identity
You are **${agent_name}**. Built in Rust. 3MB binary. Zero bloat.
---
*This file is yours to evolve.*"
step_ok "Workspace scaffold ready at $workspace_dir"
unset -f _write_if_missing
}
resolve_container_cli() {
local requested_cli
requested_cli="${ZEROCLAW_CONTAINER_CLI:-docker}"
@@ -860,10 +1004,17 @@ run_docker_bootstrap() {
-v "$config_mount" \
-v "$workspace_mount" \
"$docker_image" \
"${onboard_cmd[@]}"
"${onboard_cmd[@]}" || true
else
info "Docker image ready. Run zeroclaw onboard inside the container to configure."
fi
# Ensure config.toml and workspace scaffold exist on the host even when
# onboard was skipped, failed, or ran non-interactively inside the container.
ensure_default_config_and_workspace \
"$docker_data_dir/.zeroclaw" \
"$docker_data_dir/workspace" \
"$PROVIDER"
}
SCRIPT_PATH="${BASH_SOURCE[0]:-$0}"
@@ -1145,7 +1296,11 @@ if [[ "$FORCE_SOURCE_BUILD" == false ]]; then
SKIP_BUILD=true
SKIP_INSTALL=true
elif [[ "$PREBUILT_ONLY" == true ]]; then
error "Pre-built-only mode requested, but no compatible release asset is available."
if is_musl_linux; then
error "Pre-built-only mode is not supported on musl/Alpine because releases do not include musl assets yet."
else
error "Pre-built-only mode requested, but no compatible release asset is available."
fi
error "Try again later, or run with --force-source-build on a machine with enough RAM/disk."
exit 1
else
@@ -1190,6 +1345,12 @@ if [[ -n "$TARGET_VERSION" ]]; then
step_dot "Installing ZeroClaw v${TARGET_VERSION}"
fi
if [[ "$SKIP_BUILD" == false ]]; then
# Clean stale build artifacts on upgrade to prevent bindgen/build-script
# cache mismatches (e.g. libsqlite3-sys bindgen.rs not found).
if [[ "$INSTALL_MODE" == "upgrade" && -d "$WORK_DIR/target/release/build" ]]; then
step_dot "Cleaning stale build cache (upgrade detected)"
cargo clean --release 2>/dev/null || true
fi
step_dot "Building release binary"
cargo build --release --locked
step_ok "Release binary built"
@@ -1280,6 +1441,13 @@ elif [[ -z "$ZEROCLAW_BIN" ]]; then
warn "ZeroClaw binary not found — cannot configure provider"
fi
# Ensure config.toml and workspace scaffold exist even when onboard was
# skipped, unavailable, or failed (e.g. --skip-build --prefer-prebuilt
# without an API key, or when the binary could not run onboard).
_native_config_dir="${ZEROCLAW_CONFIG_DIR:-$HOME/.zeroclaw}"
_native_workspace_dir="${ZEROCLAW_WORKSPACE:-$_native_config_dir/workspace}"
ensure_default_config_and_workspace "$_native_config_dir" "$_native_workspace_dir" "$PROVIDER"
# --- Gateway service management ---
if [[ -n "$ZEROCLAW_BIN" ]]; then
# Try to install and start the gateway service
@@ -1290,8 +1458,14 @@ if [[ -n "$ZEROCLAW_BIN" ]]; then
step_ok "Gateway service restarted"
# Fetch and display pairing code from running gateway
sleep 1 # brief wait for service to start
if PAIR_CODE=$("$ZEROCLAW_BIN" gateway get-paircode 2>/dev/null | grep -oE '[0-9]{6}'); then
PAIR_CODE=""
for i in 1 2 3 4 5; do
sleep 2
if PAIR_CODE=$("$ZEROCLAW_BIN" gateway get-paircode 2>/dev/null | grep -oE '[0-9]{6}'); then
break
fi
done
if [[ -n "$PAIR_CODE" ]]; then
echo
echo -e " ${BOLD_BLUE}🔐 Gateway Pairing Code${RESET}"
echo
@@ -1300,6 +1474,7 @@ if [[ -n "$ZEROCLAW_BIN" ]]; then
echo -e " ${BOLD_BLUE}└──────────────┘${RESET}"
echo
echo -e " ${DIM}Enter this code in the dashboard to pair your device.${RESET}"
echo -e " ${DIM}Run 'zeroclaw gateway get-paircode --new' anytime to generate a fresh code.${RESET}"
fi
else
step_fail "Gateway service restart failed — re-run with zeroclaw service start"
@@ -1331,6 +1506,13 @@ else
echo -e "${BOLD_BLUE}${CRAB} ZeroClaw installed successfully!${RESET}"
fi
if [[ -x "$HOME/.cargo/bin/zeroclaw" ]] && ! have_cmd zeroclaw; then
echo
warn "zeroclaw is installed in $HOME/.cargo/bin, but that directory is not in PATH for this shell."
warn 'Run: export PATH="$HOME/.cargo/bin:$PATH"'
step_dot "To persist it, add that export line to ~/.bashrc, ~/.zshrc, or your shell profile, then open a new shell."
fi
if [[ "$INSTALL_MODE" == "upgrade" ]]; then
step_dot "Upgrade complete"
fi
+203 -3
View File
@@ -4,6 +4,7 @@ use crate::agent::dispatcher::{
use crate::agent::memory_loader::{DefaultMemoryLoader, MemoryLoader};
use crate::agent::prompt::{PromptContext, SystemPromptBuilder};
use crate::config::Config;
use crate::i18n::ToolDescriptions;
use crate::memory::{self, Memory, MemoryCategory};
use crate::observability::{self, Observer, ObserverEvent};
use crate::providers::{self, ChatMessage, ChatRequest, ConversationMessage, Provider};
@@ -33,12 +34,17 @@ pub struct Agent {
skills: Vec<crate::skills::Skill>,
skills_prompt_mode: crate::config::SkillsPromptInjectionMode,
auto_save: bool,
memory_session_id: Option<String>,
history: Vec<ConversationMessage>,
classification_config: crate::config::QueryClassificationConfig,
available_hints: Vec<String>,
route_model_by_hint: HashMap<String, String>,
allowed_tools: Option<Vec<String>>,
response_cache: Option<Arc<crate::memory::response_cache::ResponseCache>>,
tool_descriptions: Option<ToolDescriptions>,
/// Pre-rendered security policy summary injected into the system prompt
/// so the LLM knows the concrete constraints before making tool calls.
security_summary: Option<String>,
}
pub struct AgentBuilder {
@@ -57,11 +63,14 @@ pub struct AgentBuilder {
skills: Option<Vec<crate::skills::Skill>>,
skills_prompt_mode: Option<crate::config::SkillsPromptInjectionMode>,
auto_save: Option<bool>,
memory_session_id: Option<String>,
classification_config: Option<crate::config::QueryClassificationConfig>,
available_hints: Option<Vec<String>>,
route_model_by_hint: Option<HashMap<String, String>>,
allowed_tools: Option<Vec<String>>,
response_cache: Option<Arc<crate::memory::response_cache::ResponseCache>>,
tool_descriptions: Option<ToolDescriptions>,
security_summary: Option<String>,
}
impl AgentBuilder {
@@ -82,11 +91,14 @@ impl AgentBuilder {
skills: None,
skills_prompt_mode: None,
auto_save: None,
memory_session_id: None,
classification_config: None,
available_hints: None,
route_model_by_hint: None,
allowed_tools: None,
response_cache: None,
tool_descriptions: None,
security_summary: None,
}
}
@@ -168,6 +180,11 @@ impl AgentBuilder {
self
}
pub fn memory_session_id(mut self, memory_session_id: Option<String>) -> Self {
self.memory_session_id = memory_session_id;
self
}
pub fn classification_config(
mut self,
classification_config: crate::config::QueryClassificationConfig,
@@ -199,6 +216,16 @@ impl AgentBuilder {
self
}
pub fn tool_descriptions(mut self, tool_descriptions: Option<ToolDescriptions>) -> Self {
self.tool_descriptions = tool_descriptions;
self
}
pub fn security_summary(mut self, summary: Option<String>) -> Self {
self.security_summary = summary;
self
}
pub fn build(self) -> Result<Agent> {
let mut tools = self
.tools
@@ -242,12 +269,15 @@ impl AgentBuilder {
skills: self.skills.unwrap_or_default(),
skills_prompt_mode: self.skills_prompt_mode.unwrap_or_default(),
auto_save: self.auto_save.unwrap_or(false),
memory_session_id: self.memory_session_id,
history: Vec::new(),
classification_config: self.classification_config.unwrap_or_default(),
available_hints: self.available_hints.unwrap_or_default(),
route_model_by_hint: self.route_model_by_hint.unwrap_or_default(),
allowed_tools: allowed,
response_cache: self.response_cache,
tool_descriptions: self.tool_descriptions,
security_summary: self.security_summary,
})
}
}
@@ -265,6 +295,29 @@ impl Agent {
self.history.clear();
}
pub fn set_memory_session_id(&mut self, session_id: Option<String>) {
self.memory_session_id = session_id;
}
/// Hydrate the agent with prior chat messages (e.g. from a session backend).
///
/// Ensures a system prompt is prepended if history is empty, then appends all
/// non-system messages from the seed. System messages in the seed are skipped
/// to avoid duplicating the system prompt.
pub fn seed_history(&mut self, messages: &[ChatMessage]) {
if self.history.is_empty() {
if let Ok(sys) = self.build_system_prompt() {
self.history
.push(ConversationMessage::Chat(ChatMessage::system(sys)));
}
}
for msg in messages {
if msg.role != "system" {
self.history.push(ConversationMessage::Chat(msg.clone()));
}
}
}
pub fn from_config(config: &Config) -> Result<Self> {
let observer: Arc<dyn Observer> =
Arc::from(observability::create_observer(&config.observability));
@@ -318,13 +371,16 @@ impl Agent {
.unwrap_or("anthropic/claude-sonnet-4-20250514")
.to_string();
let provider: Box<dyn Provider> = providers::create_routed_provider(
let provider_runtime_options = providers::provider_runtime_options_from_config(config);
let provider: Box<dyn Provider> = providers::create_routed_provider_with_options(
provider_name,
config.api_key.as_deref(),
config.api_url.as_deref(),
&config.reliability,
&config.model_routes,
&model_name,
&provider_runtime_options,
)?;
let dispatcher_choice = config.agent.tool_dispatcher.as_str();
@@ -381,6 +437,7 @@ impl Agent {
))
.skills_prompt_mode(config.skills.prompt_injection_mode)
.auto_save(config.memory.auto_save)
.security_summary(Some(security.prompt_summary()))
.build()
}
@@ -421,6 +478,8 @@ impl Agent {
skills_prompt_mode: self.skills_prompt_mode,
identity_config: Some(&self.identity_config),
dispatcher_instructions: &instructions,
tool_descriptions: self.tool_descriptions.as_ref(),
security_summary: self.security_summary.clone(),
};
self.prompt_builder.build(&ctx)
}
@@ -515,13 +574,22 @@ impl Agent {
if self.auto_save {
let _ = self
.memory
.store("user_msg", user_message, MemoryCategory::Conversation, None)
.store(
"user_msg",
user_message,
MemoryCategory::Conversation,
self.memory_session_id.as_deref(),
)
.await;
}
let context = self
.memory_loader
.load_context(self.memory.as_ref(), user_message)
.load_context(
self.memory.as_ref(),
user_message,
self.memory_session_id.as_deref(),
)
.await
.unwrap_or_default();
@@ -984,6 +1052,92 @@ mod tests {
assert_eq!(seen.as_slice(), &["hint:fast".to_string()]);
}
#[tokio::test]
async fn from_config_passes_extra_headers_to_custom_provider() {
use axum::{http::HeaderMap, routing::post, Json, Router};
use tempfile::TempDir;
use tokio::net::TcpListener;
let captured_headers: Arc<std::sync::Mutex<Option<HashMap<String, String>>>> =
Arc::new(std::sync::Mutex::new(None));
let captured_headers_clone = captured_headers.clone();
let app = Router::new().route(
"/chat/completions",
post(
move |headers: HeaderMap, Json(_body): Json<serde_json::Value>| {
let captured_headers = captured_headers_clone.clone();
async move {
let collected = headers
.iter()
.filter_map(|(name, value)| {
value
.to_str()
.ok()
.map(|value| (name.as_str().to_string(), value.to_string()))
})
.collect();
*captured_headers.lock().unwrap() = Some(collected);
Json(serde_json::json!({
"choices": [{
"message": {
"content": "hello from mock"
}
}]
}))
}
},
),
);
let listener = TcpListener::bind("127.0.0.1:0").await.unwrap();
let addr = listener.local_addr().unwrap();
let server_handle = tokio::spawn(async move {
axum::serve(listener, app).await.unwrap();
});
let tmp = TempDir::new().expect("temp dir");
let workspace_dir = tmp.path().join("workspace");
std::fs::create_dir_all(&workspace_dir).unwrap();
let mut config = crate::config::Config::default();
config.workspace_dir = workspace_dir;
config.config_path = tmp.path().join("config.toml");
config.api_key = Some("test-key".to_string());
config.default_provider = Some(format!("custom:http://{addr}"));
config.default_model = Some("test-model".to_string());
config.memory.backend = "none".to_string();
config.memory.auto_save = false;
config.extra_headers.insert(
"User-Agent".to_string(),
"zeroclaw-web-test/1.0".to_string(),
);
config
.extra_headers
.insert("X-Title".to_string(), "zeroclaw-web".to_string());
let mut agent = Agent::from_config(&config).expect("agent from config");
let response = agent.turn("hello").await.expect("agent turn");
assert_eq!(response, "hello from mock");
let headers = captured_headers
.lock()
.unwrap()
.clone()
.expect("captured headers");
assert_eq!(
headers.get("user-agent").map(String::as_str),
Some("zeroclaw-web-test/1.0")
);
assert_eq!(
headers.get("x-title").map(String::as_str),
Some("zeroclaw-web")
);
server_handle.abort();
}
#[test]
fn builder_allowed_tools_none_keeps_all_tools() {
let provider = Box::new(MockProvider {
@@ -1047,4 +1201,50 @@ mod tests {
"No tools should match a non-existent allowlist entry"
);
}
#[test]
fn seed_history_prepends_system_and_skips_system_from_seed() {
let provider = Box::new(MockProvider {
responses: Mutex::new(vec![]),
});
let memory_cfg = crate::config::MemoryConfig {
backend: "none".into(),
..crate::config::MemoryConfig::default()
};
let mem: Arc<dyn Memory> = Arc::from(
crate::memory::create_memory(&memory_cfg, std::path::Path::new("/tmp"), None)
.expect("memory creation should succeed with valid config"),
);
let observer: Arc<dyn Observer> = Arc::from(crate::observability::NoopObserver {});
let mut agent = Agent::builder()
.provider(provider)
.tools(vec![Box::new(MockTool)])
.memory(mem)
.observer(observer)
.tool_dispatcher(Box::new(NativeToolDispatcher))
.workspace_dir(std::path::PathBuf::from("/tmp"))
.build()
.expect("agent builder should succeed with valid config");
let seed = vec![
ChatMessage::system("old system prompt"),
ChatMessage::user("hello"),
ChatMessage::assistant("hi there"),
];
agent.seed_history(&seed);
let history = agent.history();
// First message should be a freshly built system prompt (not the seed one)
assert!(matches!(&history[0], ConversationMessage::Chat(m) if m.role == "system"));
// System message from seed should be skipped, so next is user
assert!(
matches!(&history[1], ConversationMessage::Chat(m) if m.role == "user" && m.content == "hello")
);
assert!(
matches!(&history[2], ConversationMessage::Chat(m) if m.role == "assistant" && m.content == "hi there")
);
assert_eq!(history.len(), 3);
}
}
+1 -12
@@ -128,7 +128,7 @@ impl ToolDispatcher for XmlToolDispatcher {
ConversationMessage::Chat(ChatMessage::user(format!("[Tool results]\n{content}")))
}
fn prompt_instructions(&self, tools: &[Box<dyn Tool>]) -> String {
fn prompt_instructions(&self, _tools: &[Box<dyn Tool>]) -> String {
let mut instructions = String::new();
instructions.push_str("## Tool Use Protocol\n\n");
instructions
@@ -136,17 +136,6 @@ impl ToolDispatcher for XmlToolDispatcher {
instructions.push_str(
"```\n<tool_call>\n{\"name\": \"tool_name\", \"arguments\": {\"param\": \"value\"}}\n</tool_call>\n```\n\n",
);
instructions.push_str("### Available Tools\n\n");
for tool in tools {
let _ = writeln!(
instructions,
"- **{}**: {}\n Parameters: `{}`",
tool.name(),
tool.description(),
tool.parameters_schema()
);
}
instructions
}
+1075 -150
File diff suppressed because it is too large
+19 -5
@@ -4,8 +4,12 @@ use std::fmt::Write;
#[async_trait]
pub trait MemoryLoader: Send + Sync {
async fn load_context(&self, memory: &dyn Memory, user_message: &str)
-> anyhow::Result<String>;
async fn load_context(
&self,
memory: &dyn Memory,
user_message: &str,
session_id: Option<&str>,
) -> anyhow::Result<String>;
}
pub struct DefaultMemoryLoader {
@@ -37,8 +41,9 @@ impl MemoryLoader for DefaultMemoryLoader {
&self,
memory: &dyn Memory,
user_message: &str,
session_id: Option<&str>,
) -> anyhow::Result<String> {
let entries = memory.recall(user_message, self.limit, None).await?;
let entries = memory.recall(user_message, self.limit, session_id).await?;
if entries.is_empty() {
return Ok(String::new());
}
@@ -48,6 +53,9 @@ impl MemoryLoader for DefaultMemoryLoader {
if memory::is_assistant_autosave_key(&entry.key) {
continue;
}
if memory::should_skip_autosave_content(&entry.content) {
continue;
}
if let Some(score) = entry.score {
if score < self.min_relevance_score {
continue;
@@ -191,7 +199,10 @@ mod tests {
#[tokio::test]
async fn default_loader_formats_context() {
let loader = DefaultMemoryLoader::default();
let context = loader.load_context(&MockMemory, "hello").await.unwrap();
let context = loader
.load_context(&MockMemory, "hello", None)
.await
.unwrap();
assert!(context.contains("[Memory context]"));
assert!(context.contains("- k: v"));
}
@@ -222,7 +233,10 @@ mod tests {
]),
};
let context = loader.load_context(&memory, "answer style").await.unwrap();
let context = loader
.load_context(&memory, "answer style", None)
.await
.unwrap();
assert!(context.contains("user_fact"));
assert!(!context.contains("assistant_resp_legacy"));
assert!(!context.contains("fabricated detail"));
+127 -3
@@ -1,4 +1,5 @@
use crate::config::IdentityConfig;
use crate::i18n::ToolDescriptions;
use crate::identity;
use crate::skills::Skill;
use crate::tools::Tool;
@@ -17,6 +18,14 @@ pub struct PromptContext<'a> {
pub skills_prompt_mode: crate::config::SkillsPromptInjectionMode,
pub identity_config: Option<&'a IdentityConfig>,
pub dispatcher_instructions: &'a str,
/// Locale-aware tool descriptions. When present, tool descriptions in
/// prompts are resolved from the locale file instead of hardcoded values.
pub tool_descriptions: Option<&'a ToolDescriptions>,
/// Pre-rendered security policy summary for inclusion in the Safety
/// prompt section. When present, the LLM sees the concrete constraints
/// (allowed commands, forbidden paths, autonomy level) so it can plan
/// tool calls without trial-and-error. See issue #2404.
pub security_summary: Option<String>,
}
pub trait PromptSection: Send + Sync {
@@ -34,6 +43,7 @@ impl SystemPromptBuilder {
Self {
sections: vec![
Box::new(IdentitySection),
Box::new(ToolHonestySection),
Box::new(ToolsSection),
Box::new(SafetySection),
Box::new(SkillsSection),
@@ -65,6 +75,7 @@ impl SystemPromptBuilder {
}
pub struct IdentitySection;
pub struct ToolHonestySection;
pub struct ToolsSection;
pub struct SafetySection;
pub struct SkillsSection;
@@ -116,6 +127,22 @@ impl PromptSection for IdentitySection {
}
}
impl PromptSection for ToolHonestySection {
fn name(&self) -> &str {
"tool_honesty"
}
fn build(&self, _ctx: &PromptContext<'_>) -> Result<String> {
Ok(
"## CRITICAL: Tool Honesty\n\n\
- NEVER fabricate, invent, or guess tool results. If a tool returns empty results, say \"No results found.\"\n\
- If a tool call fails, report the error; never make up data to fill the gap.\n\
- When unsure whether a tool call succeeded, ask the user rather than guessing."
.into(),
)
}
}
impl PromptSection for ToolsSection {
fn name(&self) -> &str {
"tools"
@@ -124,11 +151,15 @@ impl PromptSection for ToolsSection {
fn build(&self, ctx: &PromptContext<'_>) -> Result<String> {
let mut out = String::from("## Tools\n\n");
for tool in ctx.tools {
let desc = ctx
.tool_descriptions
.and_then(|td: &ToolDescriptions| td.get(tool.name()))
.unwrap_or_else(|| tool.description());
let _ = writeln!(
out,
"- **{}**: {}\n Parameters: `{}`",
tool.name(),
tool.description(),
desc,
tool.parameters_schema()
);
}
@@ -145,8 +176,25 @@ impl PromptSection for SafetySection {
"safety"
}
fn build(&self, _ctx: &PromptContext<'_>) -> Result<String> {
Ok("## Safety\n\n- Do not exfiltrate private data.\n- Do not run destructive commands without asking.\n- Do not bypass oversight or approval mechanisms.\n- Prefer `trash` over `rm`.\n- When in doubt, ask before acting externally.".into())
fn build(&self, ctx: &PromptContext<'_>) -> Result<String> {
let mut out = String::from(
"## Safety\n\n\
- Do not exfiltrate private data.\n\
- Do not run destructive commands without asking.\n\
- Do not bypass oversight or approval mechanisms.\n\
- Prefer `trash` over `rm`.\n\
- When in doubt, ask before acting externally.",
);
// Append concrete security policy constraints when available (#2404).
// This tells the LLM exactly what commands are allowed, which paths
// are off-limits, etc. — preventing wasteful trial-and-error.
if let Some(ref summary) = ctx.security_summary {
out.push_str("\n\n### Active Security Policy\n\n");
out.push_str(summary);
}
Ok(out)
}
}
@@ -317,6 +365,8 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: Some(&identity_config),
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
};
let section = IdentitySection;
@@ -345,6 +395,8 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "instr",
tool_descriptions: None,
security_summary: None,
};
let prompt = SystemPromptBuilder::with_defaults().build(&ctx).unwrap();
assert!(prompt.contains("## Tools"));
@@ -380,6 +432,8 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
};
let output = SkillsSection.build(&ctx).unwrap();
@@ -418,12 +472,15 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Compact,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
};
let output = SkillsSection.build(&ctx).unwrap();
assert!(output.contains("<available_skills>"));
assert!(output.contains("<name>deploy</name>"));
assert!(output.contains("<location>skills/deploy/SKILL.md</location>"));
assert!(output.contains("read_skill(name)"));
assert!(!output.contains("<instruction>Run smoke tests before deploy.</instruction>"));
assert!(!output.contains("<tools>"));
}
@@ -439,6 +496,8 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "instr",
tool_descriptions: None,
security_summary: None,
};
let rendered = DateTimeSection.build(&ctx).unwrap();
@@ -477,6 +536,8 @@ mod tests {
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
};
let prompt = SystemPromptBuilder::with_defaults().build(&ctx).unwrap();
@@ -493,4 +554,67 @@ mod tests {
"<instruction>Use &lt;tool_call&gt; and &amp; keep output &quot;safe&quot;</instruction>"
));
}
#[test]
fn safety_section_includes_security_summary_when_present() {
let tools: Vec<Box<dyn Tool>> = vec![];
let summary = "**Autonomy level**: Supervised\n\
**Allowed shell commands**: `git`, `ls`.\n"
.to_string();
let ctx = PromptContext {
workspace_dir: Path::new("/tmp"),
model_name: "test-model",
tools: &tools,
skills: &[],
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: Some(summary.clone()),
};
let output = SafetySection.build(&ctx).unwrap();
assert!(
output.contains("## Safety"),
"should contain base safety header"
);
assert!(
output.contains("### Active Security Policy"),
"should contain security policy header"
);
assert!(
output.contains("Autonomy level"),
"should contain autonomy level from summary"
);
assert!(
output.contains("`git`"),
"should contain allowed commands from summary"
);
}
#[test]
fn safety_section_omits_security_policy_when_none() {
let tools: Vec<Box<dyn Tool>> = vec![];
let ctx = PromptContext {
workspace_dir: Path::new("/tmp"),
model_name: "test-model",
tools: &tools,
skills: &[],
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
};
let output = SafetySection.build(&ctx).unwrap();
assert!(
output.contains("## Safety"),
"should contain base safety header"
);
assert!(
!output.contains("### Active Security Policy"),
"should NOT contain security policy header when None"
);
}
}
+6 -2
@@ -1282,8 +1282,12 @@ fn xml_dispatcher_generates_tool_instructions() {
assert!(instructions.contains("## Tool Use Protocol"));
assert!(instructions.contains("<tool_call>"));
assert!(instructions.contains("echo"));
assert!(instructions.contains("Echoes the input"));
// Tool listing is handled by ToolsSection in prompt.rs, not by the
// dispatcher. prompt_instructions() must only emit the protocol envelope.
assert!(
!instructions.contains("echo"),
"dispatcher should not duplicate tool listing"
);
}
#[test]
+15
@@ -126,6 +126,15 @@ impl ApprovalManager {
return true;
}
// Channel-driven shell execution is still guarded by the shell tool's
// own command allowlist and risk policy. Skipping the outer approval
// gate here lets low-risk allowlisted commands (e.g. `ls`) work in
// non-interactive channels without silently allowing medium/high-risk
// commands.
if self.non_interactive && tool_name == "shell" {
return false;
}
// auto_approve skips the prompt.
if self.auto_approve.contains(tool_name) {
return false;
@@ -456,6 +465,12 @@ mod tests {
assert!(!mgr.needs_approval("memory_recall"));
}
#[test]
fn non_interactive_shell_skips_outer_approval_by_default() {
let mgr = ApprovalManager::for_non_interactive(&AutonomyConfig::default());
assert!(!mgr.needs_approval("shell"));
}
#[test]
fn non_interactive_always_ask_tools_need_approval() {
let mgr = ApprovalManager::for_non_interactive(&supervised_config());
+571
@@ -0,0 +1,571 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use anyhow::{bail, Result};
use async_trait::async_trait;
use parking_lot::Mutex;
use serde::{Deserialize, Serialize};
use std::time::{Duration, Instant};
/// Bluesky channel — polls for mentions via AT Protocol and replies as posts.
pub struct BlueskyChannel {
handle: String,
app_password: String,
auth: Mutex<BlueskyAuth>,
}
struct BlueskyAuth {
access_jwt: String,
refresh_jwt: String,
did: String,
expires_at: Instant,
}
const BSKY_API_BASE: &str = "https://bsky.social/xrpc";
const POLL_INTERVAL: Duration = Duration::from_secs(5);
#[derive(Deserialize)]
struct CreateSessionResponse {
#[serde(rename = "accessJwt")]
access_jwt: String,
#[serde(rename = "refreshJwt")]
refresh_jwt: String,
did: String,
}
#[derive(Deserialize)]
struct RefreshSessionResponse {
#[serde(rename = "accessJwt")]
access_jwt: String,
#[serde(rename = "refreshJwt")]
refresh_jwt: String,
}
#[derive(Deserialize)]
struct NotificationListResponse {
notifications: Vec<Notification>,
cursor: Option<String>,
}
#[allow(dead_code)]
#[derive(Deserialize)]
struct Notification {
uri: String,
cid: String,
author: NotificationAuthor,
reason: String,
record: Option<serde_json::Value>,
#[serde(rename = "isRead")]
is_read: bool,
#[serde(rename = "indexedAt")]
indexed_at: String,
}
#[allow(dead_code)]
#[derive(Deserialize)]
struct NotificationAuthor {
did: String,
handle: String,
#[serde(rename = "displayName")]
display_name: Option<String>,
}
/// AT Protocol record for creating a post.
#[derive(Serialize)]
struct CreateRecordRequest {
repo: String,
collection: String,
record: PostRecord,
}
#[derive(Serialize)]
struct PostRecord {
#[serde(rename = "$type")]
record_type: String,
text: String,
#[serde(rename = "createdAt")]
created_at: String,
#[serde(skip_serializing_if = "Option::is_none")]
reply: Option<ReplyRef>,
}
#[derive(Serialize)]
struct ReplyRef {
root: PostRef,
parent: PostRef,
}
#[derive(Serialize)]
struct PostRef {
uri: String,
cid: String,
}
impl BlueskyChannel {
pub fn new(handle: String, app_password: String) -> Self {
Self {
handle,
app_password,
auth: Mutex::new(BlueskyAuth {
access_jwt: String::new(),
refresh_jwt: String::new(),
did: String::new(),
expires_at: Instant::now(),
}),
}
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.bluesky")
}
/// Create a new session with handle + app password.
async fn create_session(&self) -> Result<()> {
let client = self.http_client();
let resp = client
.post(format!("{BSKY_API_BASE}/com.atproto.server.createSession"))
.json(&serde_json::json!({
"identifier": self.handle,
"password": self.app_password,
}))
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Bluesky createSession failed ({status}): {body}");
}
let session: CreateSessionResponse = resp.json().await?;
let mut auth = self.auth.lock();
auth.access_jwt = session.access_jwt;
auth.refresh_jwt = session.refresh_jwt;
auth.did = session.did;
// AT Protocol JWTs typically last ~2 hours; refresh well before that.
auth.expires_at = Instant::now() + Duration::from_secs(90 * 60);
Ok(())
}
/// Refresh an existing session.
async fn refresh_session(&self) -> Result<()> {
let refresh_jwt = {
let auth = self.auth.lock();
auth.refresh_jwt.clone()
};
if refresh_jwt.is_empty() {
return self.create_session().await;
}
let client = self.http_client();
let resp = client
.post(format!("{BSKY_API_BASE}/com.atproto.server.refreshSession"))
.bearer_auth(&refresh_jwt)
.send()
.await?;
if !resp.status().is_success() {
// Refresh failed — fall back to full re-auth
tracing::warn!("Bluesky session refresh failed, re-authenticating");
return self.create_session().await;
}
let refreshed: RefreshSessionResponse = resp.json().await?;
let mut auth = self.auth.lock();
auth.access_jwt = refreshed.access_jwt;
auth.refresh_jwt = refreshed.refresh_jwt;
auth.expires_at = Instant::now() + Duration::from_secs(90 * 60);
Ok(())
}
/// Get a valid access JWT, refreshing if expired.
async fn get_access_jwt(&self) -> Result<String> {
{
let auth = self.auth.lock();
if !auth.access_jwt.is_empty() && Instant::now() < auth.expires_at {
return Ok(auth.access_jwt.clone());
}
}
self.refresh_session().await?;
let auth = self.auth.lock();
Ok(auth.access_jwt.clone())
}
/// Get the DID for the authenticated account.
fn get_did(&self) -> String {
self.auth.lock().did.clone()
}
/// Parse a notification into a ChannelMessage (only processes mentions).
fn parse_notification(&self, notif: &Notification) -> Option<ChannelMessage> {
// Only process mentions
if notif.reason != "mention" && notif.reason != "reply" {
return None;
}
// Skip already-read notifications
if notif.is_read {
return None;
}
// Skip own posts
if notif.author.did == self.get_did() {
return None;
}
// Extract text from the record
let text = notif
.record
.as_ref()
.and_then(|r| r.get("text"))
.and_then(|t| t.as_str())
.unwrap_or("");
if text.is_empty() {
return None;
}
// Parse timestamp from indexedAt (ISO 8601)
let timestamp = chrono::DateTime::parse_from_rfc3339(&notif.indexed_at)
.map(|dt| dt.timestamp().cast_unsigned())
.unwrap_or(0);
// Extract CID from the record for reply references
let cid = notif
.record
.as_ref()
.and_then(|r| r.get("cid"))
.and_then(|c| c.as_str())
.unwrap_or(&notif.cid);
// The reply target encodes the URI and CID needed for threading
let reply_target = format!("{}|{}", notif.uri, cid);
Some(ChannelMessage {
id: format!("bluesky_{}", notif.cid),
sender: notif.author.handle.clone(),
reply_target,
content: text.to_string(),
channel: "bluesky".to_string(),
timestamp,
thread_ts: Some(notif.uri.clone()),
})
}
/// Mark notifications as read up to a given timestamp.
async fn update_seen(&self, seen_at: &str) -> Result<()> {
let token = self.get_access_jwt().await?;
let client = self.http_client();
let resp = client
.post(format!("{BSKY_API_BASE}/app.bsky.notification.updateSeen"))
.bearer_auth(&token)
.json(&serde_json::json!({ "seenAt": seen_at }))
.send()
.await?;
if !resp.status().is_success() {
tracing::warn!("Bluesky updateSeen failed: {}", resp.status());
}
Ok(())
}
}
#[async_trait]
impl Channel for BlueskyChannel {
fn name(&self) -> &str {
"bluesky"
}
async fn send(&self, message: &SendMessage) -> Result<()> {
let token = self.get_access_jwt().await?;
let did = self.get_did();
let client = self.http_client();
let now = chrono::Utc::now().to_rfc3339();
// Parse reply reference from recipient if present (format: "uri|cid")
let reply = if message.recipient.contains('|') {
let parts: Vec<&str> = message.recipient.splitn(2, '|').collect();
if parts.len() == 2 {
let uri = parts[0];
let cid = parts[1];
Some(ReplyRef {
root: PostRef {
uri: uri.to_string(),
cid: cid.to_string(),
},
parent: PostRef {
uri: uri.to_string(),
cid: cid.to_string(),
},
})
} else {
None
}
} else {
None
};
// Bluesky posts have a 300-character limit (grapheme clusters).
// For longer content, truncate with an indicator. Count and slice by
// chars, not bytes: byte slicing such as `&content[..297]` can panic
// inside a multi-byte codepoint on non-ASCII input.
let text = if message.content.chars().count() > 300 {
let truncated: String = message.content.chars().take(297).collect();
format!("{truncated}...")
} else {
message.content.clone()
};
let request = CreateRecordRequest {
repo: did,
collection: "app.bsky.feed.post".to_string(),
record: PostRecord {
record_type: "app.bsky.feed.post".to_string(),
text,
created_at: now,
reply,
},
};
let resp = client
.post(format!("{BSKY_API_BASE}/com.atproto.repo.createRecord"))
.bearer_auth(&token)
.json(&request)
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Bluesky post failed ({status}): {body}");
}
Ok(())
}
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
// Initial auth
self.create_session().await?;
tracing::info!("Bluesky channel listening as @{}...", self.handle);
loop {
tokio::time::sleep(POLL_INTERVAL).await;
let token = match self.get_access_jwt().await {
Ok(t) => t,
Err(e) => {
tracing::warn!("Bluesky auth error: {e}");
continue;
}
};
let client = self.http_client();
let resp = match client
.get(format!(
"{BSKY_API_BASE}/app.bsky.notification.listNotifications"
))
.bearer_auth(&token)
.query(&[("limit", "25")])
.send()
.await
{
Ok(r) => r,
Err(e) => {
tracing::warn!("Bluesky poll error: {e}");
continue;
}
};
if !resp.status().is_success() {
tracing::warn!("Bluesky notifications failed: {}", resp.status());
continue;
}
let listing: NotificationListResponse = match resp.json().await {
Ok(l) => l,
Err(e) => {
tracing::warn!("Bluesky parse error: {e}");
continue;
}
};
let mut latest_indexed_at: Option<String> = None;
for notif in &listing.notifications {
if let Some(msg) = self.parse_notification(notif) {
latest_indexed_at = Some(notif.indexed_at.clone());
if tx.send(msg).await.is_err() {
return Ok(());
}
}
}
// Mark as seen
if let Some(ref seen_at) = latest_indexed_at {
if let Err(e) = self.update_seen(seen_at).await {
tracing::warn!("Bluesky updateSeen error: {e}");
}
}
let _ = &listing.cursor; // cursor available for pagination if needed
}
}
async fn health_check(&self) -> bool {
self.get_access_jwt().await.is_ok()
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_channel() -> BlueskyChannel {
let ch = BlueskyChannel::new("testbot.bsky.social".into(), "app-password".into());
// Seed auth with a DID for tests
{
let mut auth = ch.auth.lock();
auth.did = "did:plc:test123".into();
}
ch
}
fn make_notification(
reason: &str,
handle: &str,
did: &str,
text: &str,
is_read: bool,
) -> Notification {
Notification {
uri: format!("at://{did}/app.bsky.feed.post/abc123"),
cid: "bafyreitest123".into(),
author: NotificationAuthor {
did: did.into(),
handle: handle.into(),
display_name: None,
},
reason: reason.into(),
record: Some(serde_json::json!({ "text": text })),
is_read,
indexed_at: "2026-01-15T10:00:00.000Z".into(),
}
}
#[test]
fn parse_mention_notification() {
let ch = make_channel();
let notif = make_notification(
"mention",
"user1.bsky.social",
"did:plc:user1",
"@testbot hello",
false,
);
let msg = ch.parse_notification(&notif).unwrap();
assert_eq!(msg.sender, "user1.bsky.social");
assert_eq!(msg.content, "@testbot hello");
assert_eq!(msg.channel, "bluesky");
assert!(msg.id.starts_with("bluesky_"));
}
#[test]
fn parse_reply_notification() {
let ch = make_channel();
let notif = make_notification(
"reply",
"user2.bsky.social",
"did:plc:user2",
"thanks for the info!",
false,
);
let msg = ch.parse_notification(&notif).unwrap();
assert_eq!(msg.sender, "user2.bsky.social");
assert_eq!(msg.content, "thanks for the info!");
}
#[test]
fn skip_read_notifications() {
let ch = make_channel();
let notif = make_notification(
"mention",
"user1.bsky.social",
"did:plc:user1",
"old message",
true,
);
assert!(ch.parse_notification(&notif).is_none());
}
#[test]
fn skip_own_notifications() {
let ch = make_channel();
let notif = make_notification(
"mention",
"testbot.bsky.social",
"did:plc:test123", // same as seeded DID
"self message",
false,
);
assert!(ch.parse_notification(&notif).is_none());
}
#[test]
fn skip_like_notifications() {
let ch = make_channel();
let notif = make_notification(
"like",
"user1.bsky.social",
"did:plc:user1",
"liked post",
false,
);
assert!(ch.parse_notification(&notif).is_none());
}
#[test]
fn skip_empty_text() {
let ch = make_channel();
let notif = make_notification("mention", "user1.bsky.social", "did:plc:user1", "", false);
assert!(ch.parse_notification(&notif).is_none());
}
#[test]
fn reply_target_encoding() {
let ch = make_channel();
let notif = make_notification(
"mention",
"user1.bsky.social",
"did:plc:user1",
"hello",
false,
);
let msg = ch.parse_notification(&notif).unwrap();
// reply_target should contain URI|CID
assert!(msg.reply_target.contains('|'));
let parts: Vec<&str> = msg.reply_target.splitn(2, '|').collect();
assert_eq!(parts.len(), 2);
assert!(parts[0].starts_with("at://"));
}
#[test]
fn send_message_formatting() {
// Verify reply target parsing
let reply_target = "at://did:plc:user1/app.bsky.feed.post/abc|bafyreitest";
let parts: Vec<&str> = reply_target.splitn(2, '|').collect();
assert_eq!(parts.len(), 2);
assert_eq!(parts[0], "at://did:plc:user1/app.bsky.feed.post/abc");
assert_eq!(parts[1], "bafyreitest");
}
}
+41 -1
View File
@@ -711,8 +711,13 @@ impl Channel for DiscordChannel {
}
let content = d.get("content").and_then(|c| c.as_str()).unwrap_or("");
// DMs carry no guild_id in the Discord gateway payload. They are
// inherently private and implicitly addressed to the bot, so bypass
// the mention gate — requiring a @mention in a DM is never correct.
let is_dm = d.get("guild_id").is_none();
let effective_mention_only = self.mention_only && !is_dm;
let Some(clean_content) =
normalize_incoming_content(content, self.mention_only, &bot_user_id)
normalize_incoming_content(content, effective_mention_only, &bot_user_id)
else {
continue;
};
@@ -1027,6 +1032,41 @@ mod tests {
assert!(cleaned.is_none());
}
// mention_only DM-bypass tests
#[test]
fn mention_only_dm_bypasses_mention_gate() {
// DMs (no guild_id) must pass through even when mention_only is true
// and the message contains no @mention. Mirrors the listen call-site logic.
let mention_only = true;
let is_dm = true;
let effective = mention_only && !is_dm;
let cleaned = normalize_incoming_content("hello without mention", effective, "12345");
assert_eq!(cleaned.as_deref(), Some("hello without mention"));
}
#[test]
fn mention_only_guild_message_without_mention_is_rejected() {
// Guild messages (has guild_id, so is_dm = false) must still be rejected
// when mention_only is true and the message contains no @mention.
let mention_only = true;
let is_dm = false;
let effective = mention_only && !is_dm;
let cleaned = normalize_incoming_content("hello without mention", effective, "12345");
assert!(cleaned.is_none());
}
#[test]
fn mention_only_guild_message_with_mention_passes_and_strips() {
// Guild messages that do carry a @mention pass through and have the
// mention tag stripped, consistent with pre-existing behaviour.
let mention_only = true;
let is_dm = false;
let effective = mention_only && !is_dm;
let cleaned = normalize_incoming_content("<@12345> run status", effective, "12345");
assert_eq!(cleaned.as_deref(), Some("run status"));
}
// Message splitting tests
#[test]
+326
@@ -0,0 +1,326 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use async_trait::async_trait;
use serde_json::json;
use std::collections::HashSet;
use std::sync::Arc;
use tokio::sync::RwLock;
use uuid::Uuid;
/// Deduplication set capacity — evict half of entries when full.
const DEDUP_CAPACITY: usize = 10_000;
/// Mochat customer service channel.
///
/// Integrates with the Mochat open-source customer service platform API
/// for receiving and sending messages through its HTTP endpoints.
pub struct MochatChannel {
api_url: String,
api_token: String,
allowed_users: Vec<String>,
poll_interval_secs: u64,
/// Message deduplication set.
dedup: Arc<RwLock<HashSet<String>>>,
}
impl MochatChannel {
pub fn new(
api_url: String,
api_token: String,
allowed_users: Vec<String>,
poll_interval_secs: u64,
) -> Self {
Self {
api_url: api_url.trim_end_matches('/').to_string(),
api_token,
allowed_users,
poll_interval_secs,
dedup: Arc::new(RwLock::new(HashSet::new())),
}
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.mochat")
}
fn is_user_allowed(&self, user_id: &str) -> bool {
self.allowed_users.iter().any(|u| u == "*" || u == user_id)
}
/// Check and insert message ID for deduplication.
async fn is_duplicate(&self, msg_id: &str) -> bool {
if msg_id.is_empty() {
return false;
}
let mut dedup = self.dedup.write().await;
if dedup.contains(msg_id) {
return true;
}
if dedup.len() >= DEDUP_CAPACITY {
let to_remove: Vec<String> = dedup.iter().take(DEDUP_CAPACITY / 2).cloned().collect();
for key in to_remove {
dedup.remove(&key);
}
}
dedup.insert(msg_id.to_string());
false
}
}
#[async_trait]
impl Channel for MochatChannel {
fn name(&self) -> &str {
"mochat"
}
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
let body = json!({
"toUserId": message.recipient,
"msgType": "text",
"content": {
"text": message.content,
}
});
let resp = self
.http_client()
.post(format!("{}/api/message/send", self.api_url))
.header("Authorization", format!("Bearer {}", self.api_token))
.json(&body)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status();
let err = resp.text().await.unwrap_or_default();
anyhow::bail!("Mochat send message failed ({status}): {err}");
}
let result: serde_json::Value = resp.json().await?;
let code = result.get("code").and_then(|v| v.as_i64()).unwrap_or(-1);
if code != 0 && code != 200 {
let msg = result
.get("msg")
.or_else(|| result.get("message"))
.and_then(|v| v.as_str())
.unwrap_or("unknown error");
anyhow::bail!("Mochat API error (code={code}): {msg}");
}
Ok(())
}
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> anyhow::Result<()> {
tracing::info!("Mochat: starting message poller");
let poll_interval = std::time::Duration::from_secs(self.poll_interval_secs);
let mut last_message_id: Option<String> = None;
loop {
let mut url = format!("{}/api/message/receive", self.api_url);
if let Some(ref id) = last_message_id {
use std::fmt::Write;
let _ = write!(url, "?since_id={id}");
}
match self
.http_client()
.get(&url)
.header("Authorization", format!("Bearer {}", self.api_token))
.send()
.await
{
Ok(resp) if resp.status().is_success() => {
let data: serde_json::Value = match resp.json().await {
Ok(d) => d,
Err(e) => {
tracing::warn!("Mochat: failed to parse response: {e}");
tokio::time::sleep(poll_interval).await;
continue;
}
};
let messages = data
.get("data")
.or_else(|| data.get("messages"))
.and_then(|d| d.as_array());
if let Some(messages) = messages {
for msg in messages {
let msg_id = msg
.get("messageId")
.or_else(|| msg.get("id"))
.and_then(|i| i.as_str())
.unwrap_or("");
if self.is_duplicate(msg_id).await {
continue;
}
let sender = msg
.get("fromUserId")
.or_else(|| msg.get("sender"))
.and_then(|s| s.as_str())
.unwrap_or("unknown");
if !self.is_user_allowed(sender) {
tracing::debug!(
"Mochat: ignoring message from unauthorized user: {sender}"
);
continue;
}
let content = msg
.get("content")
.and_then(|c| {
c.get("text")
.and_then(|t| t.as_str())
.or_else(|| c.as_str())
})
.unwrap_or("")
.trim();
if content.is_empty() {
continue;
}
let channel_msg = ChannelMessage {
id: Uuid::new_v4().to_string(),
sender: sender.to_string(),
reply_target: sender.to_string(),
content: content.to_string(),
channel: "mochat".to_string(),
timestamp: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
};
if tx.send(channel_msg).await.is_err() {
tracing::warn!("Mochat: message channel closed");
return Ok(());
}
if !msg_id.is_empty() {
last_message_id = Some(msg_id.to_string());
}
}
}
}
Ok(resp) => {
let status = resp.status();
let err = resp.text().await.unwrap_or_default();
tracing::warn!("Mochat: poll request failed ({status}): {err}");
}
Err(e) => {
tracing::warn!("Mochat: poll request error: {e}");
}
}
tokio::time::sleep(poll_interval).await;
}
}
async fn health_check(&self) -> bool {
let resp = self
.http_client()
.get(format!("{}/api/health", self.api_url))
.header("Authorization", format!("Bearer {}", self.api_token))
.send()
.await;
match resp {
Ok(r) => r.status().is_success(),
Err(_) => false,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_name() {
let ch = MochatChannel::new("https://mochat.example.com".into(), "tok".into(), vec![], 5);
assert_eq!(ch.name(), "mochat");
}
#[test]
fn test_api_url_trailing_slash_stripped() {
let ch = MochatChannel::new(
"https://mochat.example.com/".into(),
"tok".into(),
vec![],
5,
);
assert_eq!(ch.api_url, "https://mochat.example.com");
}
#[test]
fn test_user_allowed_wildcard() {
let ch = MochatChannel::new("https://m.test".into(), "tok".into(), vec!["*".into()], 5);
assert!(ch.is_user_allowed("anyone"));
}
#[test]
fn test_user_allowed_specific() {
let ch = MochatChannel::new(
"https://m.test".into(),
"tok".into(),
vec!["user123".into()],
5,
);
assert!(ch.is_user_allowed("user123"));
assert!(!ch.is_user_allowed("other"));
}
#[test]
fn test_user_denied_empty() {
let ch = MochatChannel::new("https://m.test".into(), "tok".into(), vec![], 5);
assert!(!ch.is_user_allowed("anyone"));
}
#[tokio::test]
async fn test_dedup() {
let ch = MochatChannel::new("https://m.test".into(), "tok".into(), vec![], 5);
assert!(!ch.is_duplicate("msg1").await);
assert!(ch.is_duplicate("msg1").await);
assert!(!ch.is_duplicate("msg2").await);
}
#[tokio::test]
async fn test_dedup_empty_id() {
let ch = MochatChannel::new("https://m.test".into(), "tok".into(), vec![], 5);
assert!(!ch.is_duplicate("").await);
assert!(!ch.is_duplicate("").await);
}
#[test]
fn test_config_serde() {
let toml_str = r#"
api_url = "https://mochat.example.com"
api_token = "secret"
allowed_users = ["user1"]
"#;
let config: crate::config::schema::MochatConfig = toml::from_str(toml_str).unwrap();
assert_eq!(config.api_url, "https://mochat.example.com");
assert_eq!(config.api_token, "secret");
assert_eq!(config.allowed_users, vec!["user1"]);
}
#[test]
fn test_config_serde_defaults() {
let toml_str = r#"
api_url = "https://mochat.example.com"
api_token = "secret"
"#;
let config: crate::config::schema::MochatConfig = toml::from_str(toml_str).unwrap();
assert!(config.allowed_users.is_empty());
assert_eq!(config.poll_interval_secs, 5);
}
}
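The capacity-bounded dedup logic above can be sketched as a standalone function. This is a minimal sketch assuming the same evict-half policy; the capacity is shrunk for demonstration and the async `RwLock` wrapper is dropped:

```rust
use std::collections::HashSet;

// Small capacity for demonstration; the channel uses 10_000.
const DEDUP_CAPACITY: usize = 8;

/// Returns true if `id` was already seen; otherwise inserts it,
/// first evicting half of the (arbitrary) entries when the set is full.
fn check_and_insert(dedup: &mut HashSet<String>, id: &str) -> bool {
    if id.is_empty() {
        return false; // messages without an ID are never deduplicated
    }
    if dedup.contains(id) {
        return true;
    }
    if dedup.len() >= DEDUP_CAPACITY {
        let to_remove: Vec<String> = dedup.iter().take(DEDUP_CAPACITY / 2).cloned().collect();
        for key in to_remove {
            dedup.remove(&key);
        }
    }
    dedup.insert(id.to_string());
    false
}

fn main() {
    let mut seen = HashSet::new();
    assert!(!check_and_insert(&mut seen, "msg1")); // first sighting
    assert!(check_and_insert(&mut seen, "msg1")); // duplicate detected
    for i in 0..DEDUP_CAPACITY {
        check_and_insert(&mut seen, &format!("m{i}"));
    }
    assert!(seen.len() <= DEDUP_CAPACITY); // eviction keeps the set bounded
    println!("ok");
}
```

Evicting an arbitrary half trades exactness for O(1) amortized bookkeeping: a recently seen ID may occasionally be forgotten, which only risks a rare duplicate delivery rather than unbounded memory growth.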
+1285 -73
File diff suppressed because it is too large
+211 -32
@@ -62,24 +62,146 @@ impl NextcloudTalkChannel {
/// Parse a Nextcloud Talk webhook payload into channel messages.
///
/// Two payload formats are supported:
///
/// **Format A — legacy/custom** (`type: "message"`):
/// ```json
/// {
/// "type": "message",
/// "object": { "token": "<room>" },
/// "message": { "actorId": "...", "message": "...", ... }
/// }
/// ```
///
/// **Format B — Activity Streams 2.0** (`type: "Create"`):
/// This is the format actually sent by Nextcloud Talk bot webhooks.
/// ```json
/// {
/// "type": "Create",
/// "actor": { "type": "Person", "id": "users/alice", "name": "Alice" },
/// "object": { "type": "Note", "id": "177", "content": "{\"message\":\"hi\",\"parameters\":[]}", "mediaType": "text/markdown" },
/// "target": { "type": "Collection", "id": "<room_token>", "name": "Room Name" }
/// }
/// ```
pub fn parse_webhook_payload(&self, payload: &serde_json::Value) -> Vec<ChannelMessage> {
let messages = Vec::new();
let event_type = match payload.get("type").and_then(|v| v.as_str()) {
Some(t) => t,
None => return messages,
};
// Activity Streams 2.0 format sent by Nextcloud Talk bot webhooks.
if event_type.eq_ignore_ascii_case("create") {
return self.parse_as2_payload(payload);
}
// Legacy/custom format.
if !event_type.eq_ignore_ascii_case("message") {
tracing::debug!("Nextcloud Talk: skipping non-message event: {event_type}");
return messages;
}
self.parse_message_payload(payload)
}
/// Parse Activity Streams 2.0 `Create` payload (real Nextcloud Talk bot webhook format).
fn parse_as2_payload(&self, payload: &serde_json::Value) -> Vec<ChannelMessage> {
let mut messages = Vec::new();
if let Some(event_type) = payload.get("type").and_then(|v| v.as_str()) {
// Nextcloud Talk bot webhooks send "Create" for new chat messages,
// but some setups may use "message". Accept both.
let is_message_event = event_type.eq_ignore_ascii_case("message")
|| event_type.eq_ignore_ascii_case("create");
if !is_message_event {
tracing::debug!("Nextcloud Talk: skipping non-message event: {event_type}");
return messages;
}
let obj = match payload.get("object") {
Some(o) => o,
None => return messages,
};
// Only handle Note objects (= chat messages). Ignore reactions, etc.
let object_type = obj.get("type").and_then(|v| v.as_str()).unwrap_or("");
if !object_type.eq_ignore_ascii_case("note") {
tracing::debug!("Nextcloud Talk: skipping AS2 Create with object.type={object_type}");
return messages;
}
// Room token is in target.id.
let room_token = payload
.get("target")
.and_then(|t| t.get("id"))
.and_then(|v| v.as_str())
.map(str::trim)
.filter(|t| !t.is_empty());
let Some(room_token) = room_token else {
tracing::warn!("Nextcloud Talk: missing target.id (room token) in AS2 payload");
return messages;
};
// Actor — skip bot-originated messages to prevent feedback loops.
let actor = payload.get("actor").cloned().unwrap_or_default();
let actor_type = actor.get("type").and_then(|v| v.as_str()).unwrap_or("");
if actor_type.eq_ignore_ascii_case("application") {
tracing::debug!("Nextcloud Talk: skipping bot-originated AS2 message");
return messages;
}
// actor.id is "users/<id>" — strip the prefix.
let actor_id = actor
.get("id")
.and_then(|v| v.as_str())
.map(|id| id.trim_start_matches("users/").trim())
.filter(|id| !id.is_empty());
let Some(actor_id) = actor_id else {
tracing::warn!("Nextcloud Talk: missing actor.id in AS2 payload");
return messages;
};
if !self.is_user_allowed(actor_id) {
tracing::warn!(
"Nextcloud Talk: ignoring message from unauthorized actor: {actor_id}. \
Add to channels.nextcloud_talk.allowed_users in config.toml, \
or run `zeroclaw onboard --channels-only` to configure interactively."
);
return messages;
}
// Message text is JSON-encoded inside object.content.
// e.g. content = "{\"message\":\"hello\",\"parameters\":[]}"
let content = obj
.get("content")
.and_then(|v| v.as_str())
.and_then(|s| serde_json::from_str::<serde_json::Value>(s).ok())
.and_then(|v| {
v.get("message")
.and_then(|m| m.as_str())
.map(str::trim)
.map(str::to_string)
})
.filter(|s| !s.is_empty());
let Some(content) = content else {
tracing::debug!("Nextcloud Talk: empty or unparseable AS2 message content");
return messages;
};
let message_id =
Self::value_to_string(obj.get("id")).unwrap_or_else(|| Uuid::new_v4().to_string());
messages.push(ChannelMessage {
id: message_id,
reply_target: room_token.to_string(),
sender: actor_id.to_string(),
content,
channel: "nextcloud_talk".to_string(),
timestamp: Self::now_unix_secs(),
thread_ts: None,
});
}
messages
}
/// Parse legacy `type: "message"` payload format.
fn parse_message_payload(&self, payload: &serde_json::Value) -> Vec<ChannelMessage> {
let mut messages = Vec::new();
let Some(message_obj) = payload.get("message") else {
return messages;
};
@@ -343,33 +465,90 @@ mod tests {
}
#[test]
fn nextcloud_talk_parse_as2_create_payload() {
let channel = NextcloudTalkChannel::new(
"https://cloud.example.com".into(),
"app-token".into(),
vec!["*".into()],
);
// Real payload format sent by Nextcloud Talk bot webhooks.
let payload = serde_json::json!({
"type": "Create",
"actor": {
"type": "Person",
"id": "users/user_a",
"name": "User A",
"talkParticipantType": "1"
},
"object": {
"type": "Note",
"id": "177",
"name": "message",
"content": "{\"message\":\"hallo, bist du da?\",\"parameters\":[]}",
"mediaType": "text/markdown"
},
"target": {
"type": "Collection",
"id": "room-token-123",
"name": "HOME"
}
});
let messages = channel.parse_webhook_payload(&payload);
assert_eq!(messages.len(), 1);
assert_eq!(messages[0].id, "177");
assert_eq!(messages[0].reply_target, "room-token-123");
assert_eq!(messages[0].sender, "user_a");
assert_eq!(messages[0].content, "hallo, bist du da?");
assert_eq!(messages[0].channel, "nextcloud_talk");
}
#[test]
fn nextcloud_talk_parse_as2_skips_bot_originated() {
let channel = NextcloudTalkChannel::new(
"https://cloud.example.com".into(),
"app-token".into(),
vec!["*".into()],
);
let payload = serde_json::json!({
"type": "Create",
"actor": {
"type": "Application",
"id": "bots/jarvis",
"name": "jarvis"
},
"object": {
"type": "Note",
"id": "178",
"content": "{\"message\":\"I am the bot\",\"parameters\":[]}",
"mediaType": "text/markdown"
},
"target": {
"type": "Collection",
"id": "room-token-123",
"name": "HOME"
}
});
let messages = channel.parse_webhook_payload(&payload);
assert!(messages.is_empty());
}
#[test]
fn nextcloud_talk_parse_as2_skips_non_note_objects() {
let channel = NextcloudTalkChannel::new(
"https://cloud.example.com".into(),
"app-token".into(),
vec!["*".into()],
);
let payload = serde_json::json!({
"type": "Create",
"actor": { "type": "Person", "id": "users/user_a" },
"object": { "type": "Reaction", "id": "5" },
"target": { "type": "Collection", "id": "room-token-123" }
});
let messages = channel.parse_webhook_payload(&payload);
assert!(messages.is_empty());
}
#[test]
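The `users/` prefix handling from `parse_as2_payload` can be shown in isolation. A minimal sketch with an illustrative helper name; the real code inlines this with `trim_start_matches` and a `filter`:

```rust
/// Normalize an AS2 actor id: Nextcloud Talk sends "users/<id>",
/// and an empty result (e.g. a bare "users/") is treated as missing.
fn normalize_actor_id(raw: &str) -> Option<&str> {
    let id = raw.trim_start_matches("users/").trim();
    if id.is_empty() {
        None
    } else {
        Some(id)
    }
}

fn main() {
    assert_eq!(normalize_actor_id("users/alice"), Some("alice"));
    assert_eq!(normalize_actor_id("users/"), None); // prefix only: no usable id
    assert_eq!(normalize_actor_id("bob"), Some("bob")); // no prefix passes through
    println!("ok");
}
```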
+39 -4
@@ -257,8 +257,10 @@ impl Channel for QQChannel {
(
format!("{QQ_API_BASE}/v2/groups/{group_id}/messages"),
json!({
"content": &message.content,
"msg_type": 0,
"markdown": {
"content": &message.content,
},
"msg_type": 2,
}),
)
} else {
@@ -273,8 +275,10 @@ impl Channel for QQChannel {
(
format!("{QQ_API_BASE}/v2/users/{user_id}/messages"),
json!({
"content": &message.content,
"msg_type": 0,
"markdown": {
"content": &message.content,
},
"msg_type": 2,
}),
)
};
@@ -667,4 +671,35 @@ allowed_users = ["user1"]
assert_eq!(compose_message_content(&payload), None);
}
#[test]
fn test_send_body_uses_markdown_msg_type() {
// Verify the expected JSON shape for both group and user send paths.
// msg_type 2 with a nested markdown object is required by the QQ API
// for markdown rendering; msg_type 0 (plain text) causes markdown
// syntax to appear literally in the client.
let content = "**bold** and `code`";
let group_body = json!({
"markdown": { "content": content },
"msg_type": 2,
});
assert_eq!(group_body["msg_type"], 2);
assert_eq!(group_body["markdown"]["content"], content);
assert!(
group_body.get("content").is_none(),
"top-level 'content' must not be present"
);
let user_body = json!({
"markdown": { "content": content },
"msg_type": 2,
});
assert_eq!(user_body["msg_type"], 2);
assert_eq!(user_body["markdown"]["content"], content);
assert!(
user_body.get("content").is_none(),
"top-level 'content' must not be present"
);
}
}
+504
@@ -0,0 +1,504 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use anyhow::{bail, Result};
use async_trait::async_trait;
use parking_lot::Mutex;
use serde::Deserialize;
use std::time::{Duration, Instant};
/// Reddit channel — polls for mentions, DMs, and comment replies via Reddit OAuth2 API.
pub struct RedditChannel {
client_id: String,
client_secret: String,
refresh_token: String,
username: String,
subreddit: Option<String>,
auth: Mutex<RedditAuth>,
}
struct RedditAuth {
access_token: String,
expires_at: Instant,
}
#[derive(Deserialize)]
struct RedditTokenResponse {
access_token: String,
expires_in: u64,
}
#[derive(Deserialize)]
struct RedditListing {
data: RedditListingData,
}
#[derive(Deserialize)]
struct RedditListingData {
children: Vec<RedditChild>,
}
#[derive(Deserialize)]
struct RedditChild {
data: RedditItemData,
}
#[allow(dead_code)]
#[derive(Deserialize)]
struct RedditItemData {
name: Option<String>,
author: Option<String>,
body: Option<String>,
subject: Option<String>,
parent_id: Option<String>,
link_id: Option<String>,
subreddit: Option<String>,
created_utc: Option<f64>,
new: Option<bool>,
#[serde(rename = "type")]
message_type: Option<String>,
context: Option<String>,
}
const REDDIT_API_BASE: &str = "https://oauth.reddit.com";
const REDDIT_TOKEN_URL: &str = "https://www.reddit.com/api/v1/access_token";
const USER_AGENT: &str = "zeroclaw:channel:v0.1.0 (by /u/zeroclaw-bot)";
/// Reddit enforces 60 requests per minute; polling every 5 seconds stays well under that limit.
const POLL_INTERVAL: Duration = Duration::from_secs(5);
impl RedditChannel {
pub fn new(
client_id: String,
client_secret: String,
refresh_token: String,
username: String,
subreddit: Option<String>,
) -> Self {
Self {
client_id,
client_secret,
refresh_token,
username,
subreddit,
auth: Mutex::new(RedditAuth {
access_token: String::new(),
expires_at: Instant::now(),
}),
}
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.reddit")
}
/// Refresh the OAuth2 access token using the refresh token.
async fn refresh_access_token(&self) -> Result<()> {
let client = self.http_client();
let resp = client
.post(REDDIT_TOKEN_URL)
.basic_auth(&self.client_id, Some(&self.client_secret))
.header("User-Agent", USER_AGENT)
.form(&[
("grant_type", "refresh_token"),
("refresh_token", &self.refresh_token),
])
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Reddit token refresh failed ({status}): {body}");
}
let token_resp: RedditTokenResponse = resp.json().await?;
let mut auth = self.auth.lock();
auth.access_token = token_resp.access_token;
auth.expires_at =
Instant::now() + Duration::from_secs(token_resp.expires_in.saturating_sub(60));
Ok(())
}
/// Get a valid access token, refreshing if expired.
async fn get_access_token(&self) -> Result<String> {
{
let auth = self.auth.lock();
if !auth.access_token.is_empty() && Instant::now() < auth.expires_at {
return Ok(auth.access_token.clone());
}
}
self.refresh_access_token().await?;
let auth = self.auth.lock();
Ok(auth.access_token.clone())
}
/// Fetch unread inbox items (mentions, DMs, comment replies).
async fn fetch_inbox(&self) -> Result<Vec<RedditChild>> {
let token = self.get_access_token().await?;
let client = self.http_client();
let resp = client
.get(format!("{REDDIT_API_BASE}/message/unread"))
.bearer_auth(&token)
.header("User-Agent", USER_AGENT)
.query(&[("limit", "25")])
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
tracing::warn!("Reddit inbox fetch failed ({status}): {body}");
return Ok(Vec::new());
}
let listing: RedditListing = resp.json().await?;
Ok(listing.data.children)
}
/// Mark inbox items as read.
async fn mark_read(&self, fullnames: &[String]) -> Result<()> {
if fullnames.is_empty() {
return Ok(());
}
let token = self.get_access_token().await?;
let client = self.http_client();
let ids = fullnames.join(",");
let resp = client
.post(format!("{REDDIT_API_BASE}/api/read_message"))
.bearer_auth(&token)
.header("User-Agent", USER_AGENT)
.form(&[("id", ids.as_str())])
.send()
.await?;
if !resp.status().is_success() {
tracing::warn!("Reddit mark_read failed: {}", resp.status());
}
Ok(())
}
/// Parse a Reddit inbox item into a ChannelMessage.
fn parse_item(&self, item: &RedditItemData) -> Option<ChannelMessage> {
let author = item.author.as_deref().unwrap_or("");
let body = item.body.as_deref().unwrap_or("");
let name = item.name.as_deref().unwrap_or("");
// Skip messages from ourselves
if author.eq_ignore_ascii_case(&self.username) || author.is_empty() || body.is_empty() {
return None;
}
// If a subreddit filter is set, skip items from other subreddits
if let Some(ref sub) = self.subreddit {
if let Some(ref item_sub) = item.subreddit {
if !item_sub.eq_ignore_ascii_case(sub) {
return None;
}
}
}
// Determine reply target: for comment replies use the parent thing name,
// for DMs reply to the author.
let reply_target =
if item.message_type.as_deref() == Some("comment_reply") || item.parent_id.is_some() {
// For comment replies, the recipient is the parent fullname
item.parent_id.clone().unwrap_or_else(|| name.to_string())
} else {
// For DMs, reply to the author
author.to_string()
};
#[allow(clippy::cast_possible_truncation, clippy::cast_sign_loss)]
let timestamp = item.created_utc.unwrap_or(0.0) as u64;
Some(ChannelMessage {
id: format!("reddit_{name}"),
sender: author.to_string(),
reply_target,
content: body.to_string(),
channel: "reddit".to_string(),
timestamp,
thread_ts: item.parent_id.clone(),
})
}
}
#[async_trait]
impl Channel for RedditChannel {
fn name(&self) -> &str {
"reddit"
}
async fn send(&self, message: &SendMessage) -> Result<()> {
let token = self.get_access_token().await?;
let client = self.http_client();
// If recipient looks like a Reddit fullname (t1_, t3_, t4_), it's a comment reply.
// Otherwise treat it as a DM to a username.
if message.recipient.starts_with("t1_")
|| message.recipient.starts_with("t3_")
|| message.recipient.starts_with("t4_")
{
// Comment reply
let resp = client
.post(format!("{REDDIT_API_BASE}/api/comment"))
.bearer_auth(&token)
.header("User-Agent", USER_AGENT)
.form(&[
("thing_id", message.recipient.as_str()),
("text", &message.content),
])
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Reddit comment reply failed ({status}): {body}");
}
} else {
// Direct message
let subject = message
.subject
.as_deref()
.unwrap_or("Message from ZeroClaw");
let resp = client
.post(format!("{REDDIT_API_BASE}/api/compose"))
.bearer_auth(&token)
.header("User-Agent", USER_AGENT)
.form(&[
("to", message.recipient.as_str()),
("subject", subject),
("text", &message.content),
])
.send()
.await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Reddit DM failed ({status}): {body}");
}
}
Ok(())
}
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
// Initial auth
self.refresh_access_token().await?;
tracing::info!(
"Reddit channel listening as u/{} {}...",
self.username,
self.subreddit
.as_ref()
.map(|s| format!("in r/{s}"))
.unwrap_or_default()
);
loop {
tokio::time::sleep(POLL_INTERVAL).await;
let items = match self.fetch_inbox().await {
Ok(items) => items,
Err(e) => {
tracing::warn!("Reddit poll error: {e}");
continue;
}
};
let mut read_ids = Vec::new();
for child in &items {
if let Some(ref name) = child.data.name {
read_ids.push(name.clone());
}
if let Some(msg) = self.parse_item(&child.data) {
if tx.send(msg).await.is_err() {
return Ok(());
}
}
}
if let Err(e) = self.mark_read(&read_ids).await {
tracing::warn!("Reddit mark_read error: {e}");
}
}
}
async fn health_check(&self) -> bool {
self.get_access_token().await.is_ok()
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_channel() -> RedditChannel {
RedditChannel::new(
"client_id".into(),
"client_secret".into(),
"refresh_token".into(),
"testbot".into(),
None,
)
}
fn make_channel_with_sub(sub: &str) -> RedditChannel {
RedditChannel::new(
"client_id".into(),
"client_secret".into(),
"refresh_token".into(),
"testbot".into(),
Some(sub.into()),
)
}
#[test]
fn parse_comment_reply() {
let ch = make_channel();
let item = RedditItemData {
name: Some("t1_abc123".into()),
author: Some("user1".into()),
body: Some("hello bot".into()),
subject: None,
parent_id: Some("t1_parent1".into()),
link_id: Some("t3_post1".into()),
subreddit: Some("rust".into()),
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: Some("comment_reply".into()),
context: None,
};
let msg = ch.parse_item(&item).unwrap();
assert_eq!(msg.sender, "user1");
assert_eq!(msg.content, "hello bot");
assert_eq!(msg.reply_target, "t1_parent1");
assert_eq!(msg.channel, "reddit");
assert_eq!(msg.id, "reddit_t1_abc123");
}
#[test]
fn parse_dm() {
let ch = make_channel();
let item = RedditItemData {
name: Some("t4_dm456".into()),
author: Some("user2".into()),
body: Some("private message".into()),
subject: Some("Hello".into()),
parent_id: None,
link_id: None,
subreddit: None,
created_utc: Some(1_700_000_100.0),
new: Some(true),
message_type: None,
context: None,
};
let msg = ch.parse_item(&item).unwrap();
assert_eq!(msg.sender, "user2");
assert_eq!(msg.content, "private message");
assert_eq!(msg.reply_target, "user2"); // DM reply goes to author
}
#[test]
fn skip_self_messages() {
let ch = make_channel();
let item = RedditItemData {
name: Some("t1_self".into()),
author: Some("testbot".into()),
body: Some("my own message".into()),
subject: None,
parent_id: None,
link_id: None,
subreddit: None,
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: None,
context: None,
};
assert!(ch.parse_item(&item).is_none());
}
#[test]
fn skip_empty_body() {
let ch = make_channel();
let item = RedditItemData {
name: Some("t1_empty".into()),
author: Some("user1".into()),
body: Some(String::new()),
subject: None,
parent_id: None,
link_id: None,
subreddit: None,
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: None,
context: None,
};
assert!(ch.parse_item(&item).is_none());
}
#[test]
fn subreddit_filter() {
let ch = make_channel_with_sub("rust");
let item = RedditItemData {
name: Some("t1_other".into()),
author: Some("user1".into()),
body: Some("hello".into()),
subject: None,
parent_id: None,
link_id: None,
subreddit: Some("python".into()),
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: None,
context: None,
};
assert!(ch.parse_item(&item).is_none());
let matching_item = RedditItemData {
name: Some("t1_match".into()),
author: Some("user1".into()),
body: Some("hello".into()),
subject: None,
parent_id: None,
link_id: None,
subreddit: Some("rust".into()),
created_utc: Some(1_700_000_000.0),
new: Some(true),
message_type: None,
context: None,
};
assert!(ch.parse_item(&matching_item).is_some());
}
#[test]
fn send_message_formatting() {
// Verify SendMessage can be constructed for both DM and comment reply
let dm = SendMessage::new("hello", "user1");
assert_eq!(dm.recipient, "user1");
assert_eq!(dm.content, "hello");
let reply = SendMessage::new("response", "t1_abc123");
assert!(reply.recipient.starts_with("t1_"));
}
}
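The routing rule in `send` above can be factored into a predicate. A sketch mirroring the prefix check (the helper name is illustrative; the real code inlines the three `starts_with` calls):

```rust
/// Classify a send recipient the way `RedditChannel::send` does:
/// Reddit fullname prefixes t1_ (comment), t3_ (link), t4_ (message)
/// are replied to via /api/comment; anything else is treated as a
/// username and messaged via /api/compose.
fn is_comment_reply_target(recipient: &str) -> bool {
    recipient.starts_with("t1_") || recipient.starts_with("t3_") || recipient.starts_with("t4_")
}

fn main() {
    assert!(is_comment_reply_target("t1_abc123")); // comment fullname
    assert!(is_comment_reply_target("t3_post1")); // link fullname
    assert!(!is_comment_reply_target("some_user")); // plain username: DM path
    println!("ok");
}
```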
+5
@@ -76,6 +76,11 @@ pub trait SessionBackend: Send + Sync {
fn search(&self, _query: &SessionQuery) -> Vec<SessionMetadata> {
Vec::new()
}
/// Delete all messages for a session. Returns `true` if the session existed.
fn delete_session(&self, _session_key: &str) -> std::io::Result<bool> {
Ok(false)
}
}
#[cfg(test)]
+55
@@ -288,6 +288,39 @@ impl SessionBackend for SqliteSessionBackend {
Ok(count)
}
fn delete_session(&self, session_key: &str) -> std::io::Result<bool> {
let conn = self.conn.lock();
// Check if session exists
let exists: bool = conn
.query_row(
"SELECT COUNT(*) > 0 FROM session_metadata WHERE session_key = ?1",
params![session_key],
|row| row.get(0),
)
.unwrap_or(false);
if !exists {
return Ok(false);
}
// Delete messages (FTS5 trigger handles sessions_fts cleanup)
conn.execute(
"DELETE FROM sessions WHERE session_key = ?1",
params![session_key],
)
.map_err(std::io::Error::other)?;
// Delete metadata
conn.execute(
"DELETE FROM session_metadata WHERE session_key = ?1",
params![session_key],
)
.map_err(std::io::Error::other)?;
Ok(true)
}
fn search(&self, query: &SessionQuery) -> Vec<SessionMetadata> {
let Some(keyword) = &query.keyword else {
return self.list_sessions_with_metadata();
@@ -473,6 +506,28 @@ mod tests {
assert_eq!(sessions[0], "new_session");
}
#[test]
fn delete_session_removes_all_data() {
let tmp = TempDir::new().unwrap();
let backend = SqliteSessionBackend::new(tmp.path()).unwrap();
backend.append("s1", &ChatMessage::user("hello")).unwrap();
backend.append("s1", &ChatMessage::assistant("hi")).unwrap();
backend.append("s2", &ChatMessage::user("other")).unwrap();
assert!(backend.delete_session("s1").unwrap());
assert!(backend.load("s1").is_empty());
assert_eq!(backend.list_sessions().len(), 1);
assert_eq!(backend.list_sessions()[0], "s2");
}
#[test]
fn delete_session_returns_false_for_missing() {
let tmp = TempDir::new().unwrap();
let backend = SqliteSessionBackend::new(tmp.path()).unwrap();
assert!(!backend.delete_session("nonexistent").unwrap());
}
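The `delete_session` contract tested above (return `Ok(true)` only when the session existed) can be sketched with an in-memory backend. This is illustrative only, not the real SQLite implementation, which also relies on an FTS5 trigger to clean up the search index:

```rust
use std::collections::HashMap;

/// Minimal in-memory stand-in for a session backend.
struct MemBackend {
    sessions: HashMap<String, Vec<String>>,
}

impl MemBackend {
    /// Delete all messages for a session. Returns true if the session existed,
    /// matching the SessionBackend trait contract.
    fn delete_session(&mut self, key: &str) -> std::io::Result<bool> {
        Ok(self.sessions.remove(key).is_some())
    }
}

fn main() {
    let mut b = MemBackend { sessions: HashMap::new() };
    b.sessions.insert("s1".into(), vec!["hello".into()]);
    assert!(b.delete_session("s1").unwrap()); // existed: true
    assert!(!b.delete_session("s1").unwrap()); // second delete: already gone
    println!("ok");
}
```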
#[test]
fn migrate_from_jsonl_imports_and_renames() {
let tmp = TempDir::new().unwrap();
+37 -1
@@ -25,6 +25,7 @@ pub struct SlackChannel {
channel_id: Option<String>,
channel_ids: Vec<String>,
allowed_users: Vec<String>,
thread_replies: bool,
mention_only: bool,
group_reply_allowed_sender_ids: Vec<String>,
user_display_name_cache: Mutex<HashMap<String, CachedSlackDisplayName>>,
@@ -75,6 +76,7 @@ impl SlackChannel {
channel_id,
channel_ids,
allowed_users,
thread_replies: true,
mention_only: false,
group_reply_allowed_sender_ids: Vec::new(),
user_display_name_cache: Mutex::new(HashMap::new()),
@@ -94,6 +96,12 @@ impl SlackChannel {
self
}
/// Configure whether outbound replies stay in the originating Slack thread.
pub fn with_thread_replies(mut self, thread_replies: bool) -> Self {
self.thread_replies = thread_replies;
self
}
/// Configure workspace directory used for persisting inbound Slack attachments.
pub fn with_workspace_dir(mut self, dir: PathBuf) -> Self {
self.workspace_dir = Some(dir);
@@ -122,6 +130,14 @@ impl SlackChannel {
.any(|entry| entry == "*" || entry == user_id)
}
fn outbound_thread_ts<'a>(&self, message: &'a SendMessage) -> Option<&'a str> {
if self.thread_replies {
message.thread_ts.as_deref()
} else {
None
}
}
/// Get the bot's own user ID so we can ignore our own messages
async fn get_bot_user_id(&self) -> Option<String> {
let resp: serde_json::Value = self
@@ -2149,7 +2165,7 @@ impl Channel for SlackChannel {
"text": message.content
});
if let Some(ts) = self.outbound_thread_ts(message) {
body["thread_ts"] = serde_json::json!(ts);
}
@@ -2484,10 +2500,30 @@ mod tests {
#[test]
fn slack_group_reply_policy_defaults_to_all_messages() {
let ch = SlackChannel::new("xoxb-fake".into(), None, None, vec![], vec!["*".into()]);
assert!(ch.thread_replies);
assert!(!ch.mention_only);
assert!(ch.group_reply_allowed_sender_ids.is_empty());
}
#[test]
fn with_thread_replies_sets_flag() {
let ch = SlackChannel::new("xoxb-fake".into(), None, None, vec![], vec![])
.with_thread_replies(false);
assert!(!ch.thread_replies);
}
#[test]
fn outbound_thread_ts_respects_thread_replies_setting() {
let msg = SendMessage::new("hello", "C123").in_thread(Some("1741234567.100001".into()));
let threaded = SlackChannel::new("xoxb-fake".into(), None, None, vec![], vec![]);
assert_eq!(threaded.outbound_thread_ts(&msg), Some("1741234567.100001"));
let channel_root = SlackChannel::new("xoxb-fake".into(), None, None, vec![], vec![])
.with_thread_replies(false);
assert_eq!(channel_root.outbound_thread_ts(&msg), None);
}
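The gating tested above reduces to a one-line rule. A sketch as a free function (the real code is a method reading `self.thread_replies`):

```rust
/// Forward the incoming message's thread_ts only when threaded replies
/// are enabled; otherwise reply at the channel root.
fn outbound_thread_ts<'a>(thread_replies: bool, thread_ts: Option<&'a str>) -> Option<&'a str> {
    if thread_replies {
        thread_ts
    } else {
        None
    }
}

fn main() {
    assert_eq!(
        outbound_thread_ts(true, Some("1741234567.100001")),
        Some("1741234567.100001")
    );
    assert_eq!(outbound_thread_ts(false, Some("1741234567.100001")), None);
    assert_eq!(outbound_thread_ts(true, None), None); // non-threaded messages unaffected
    println!("ok");
}
```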
#[test]
fn with_workspace_dir_sets_field() {
let ch = SlackChannel::new("xoxb-fake".into(), None, None, vec![], vec![])
+254 -18
@@ -332,6 +332,18 @@ pub struct TelegramChannel {
transcription: Option<crate::config::TranscriptionConfig>,
voice_transcriptions: Mutex<std::collections::HashMap<String, String>>,
workspace_dir: Option<std::path::PathBuf>,
ack_reactions: bool,
tts_config: Option<crate::config::TtsConfig>,
voice_chats: Arc<std::sync::Mutex<std::collections::HashSet<String>>>,
pending_voice:
Arc<std::sync::Mutex<std::collections::HashMap<String, (String, std::time::Instant)>>>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum EditMessageResult {
Success,
NotModified,
Failed(reqwest::StatusCode),
}
impl TelegramChannel {
@@ -363,9 +375,19 @@ impl TelegramChannel {
transcription: None,
voice_transcriptions: Mutex::new(std::collections::HashMap::new()),
workspace_dir: None,
ack_reactions: true,
tts_config: None,
voice_chats: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
pending_voice: Arc::new(std::sync::Mutex::new(std::collections::HashMap::new())),
}
}
/// Configure whether Telegram-native acknowledgement reactions are sent.
pub fn with_ack_reactions(mut self, enabled: bool) -> Self {
self.ack_reactions = enabled;
self
}
/// Configure workspace directory for saving downloaded attachments.
pub fn with_workspace_dir(mut self, dir: std::path::PathBuf) -> Self {
self.workspace_dir = Some(dir);
@@ -398,6 +420,14 @@ impl TelegramChannel {
self
}
/// Configure text-to-speech for outgoing voice replies.
pub fn with_tts(mut self, config: crate::config::TtsConfig) -> Self {
if config.enabled {
self.tts_config = Some(config);
}
self
}
/// Parse reply_target into (chat_id, optional thread_id).
fn parse_reply_target(reply_target: &str) -> (String, Option<String>) {
if let Some((chat_id, thread_id)) = reply_target.split_once(':') {
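The `reply_target` parsing above (a `chat_id:thread_id` pair with the thread part optional) can be sketched as a self-contained function — a minimal version assuming the truncated body mirrors the doc comment:

```rust
// Sketch of the chat_id/thread_id split shown above; a bare chat id
// (no ':') yields no thread id.
fn parse_reply_target(reply_target: &str) -> (String, Option<String>) {
    match reply_target.split_once(':') {
        Some((chat_id, thread_id)) => (chat_id.to_string(), Some(thread_id.to_string())),
        None => (reply_target.to_string(), None),
    }
}

fn main() {
    assert_eq!(
        parse_reply_target("123:45"),
        ("123".to_string(), Some("45".to_string()))
    );
    assert_eq!(parse_reply_target("123"), ("123".to_string(), None));
}
```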
@@ -540,6 +570,65 @@ impl TelegramChannel {
format!("{}/bot{}/{method}", self.api_base, self.bot_token)
}
/// Synthesize text to speech and send as a Telegram voice note (static version for spawned tasks).
async fn synthesize_and_send_voice(
api_base: &str,
bot_token: &str,
chat_id: &str,
thread_id: Option<&str>,
text: &str,
tts_config: &crate::config::TtsConfig,
) -> anyhow::Result<()> {
let tts_manager = super::tts::TtsManager::new(tts_config)?;
let audio_bytes = tts_manager.synthesize(text).await?;
let audio_len = audio_bytes.len();
tracing::info!("Telegram TTS: synthesized {audio_len} bytes of audio");
if audio_bytes.is_empty() {
anyhow::bail!("TTS returned empty audio");
}
let url = format!("{api_base}/bot{bot_token}/sendVoice");
let client = crate::config::build_runtime_proxy_client("channel.telegram");
let mut form = reqwest::multipart::Form::new()
.text("chat_id", chat_id.to_string())
.part(
"voice",
reqwest::multipart::Part::bytes(audio_bytes)
.file_name("voice.ogg")
.mime_str("audio/ogg")?,
);
if let Some(tid) = thread_id {
form = form.text("message_thread_id", tid.to_string());
}
let resp = client.post(&url).multipart(form).send().await?;
if !resp.status().is_success() {
let status = resp.status();
let body = resp.text().await.unwrap_or_default();
anyhow::bail!("sendVoice failed: status={status}, body={body}");
}
tracing::info!("Telegram TTS: sent voice note ({audio_len} bytes)");
Ok(())
}
async fn classify_edit_message_response(resp: reqwest::Response) -> EditMessageResult {
if resp.status().is_success() {
return EditMessageResult::Success;
}
let status = resp.status();
let body = resp.text().await.unwrap_or_default();
if body.contains("message is not modified") {
return EditMessageResult::NotModified;
}
EditMessageResult::Failed(status)
}
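The classifier above separates Telegram's benign "message is not modified" 400 from real edit failures. A standalone sketch of that decision, with simplified stand-in types instead of `reqwest::Response`:

```rust
// Simplified stand-in for EditMessageResult; the real code inspects a
// reqwest::Response, this sketch takes (status, body) directly.
#[derive(Debug, PartialEq)]
enum EditOutcome {
    Success,
    NotModified,
    Failed(u16),
}

fn classify_edit(status: u16, body: &str) -> EditOutcome {
    if (200..300).contains(&status) {
        return EditOutcome::Success;
    }
    // Telegram returns 400 with this phrase when the new text equals the
    // old text; treat it as a no-op rather than an error.
    if body.contains("message is not modified") {
        return EditOutcome::NotModified;
    }
    EditOutcome::Failed(status)
}

fn main() {
    assert_eq!(classify_edit(200, ""), EditOutcome::Success);
    assert_eq!(
        classify_edit(400, "Bad Request: message is not modified"),
        EditOutcome::NotModified
    );
    assert_eq!(
        classify_edit(400, "Bad Request: message to edit not found"),
        EditOutcome::Failed(400)
    );
}
```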
async fn fetch_bot_username(&self) -> anyhow::Result<String> {
let resp = self.http_client().get(self.api_url("getMe")).send().await?;
@@ -730,7 +819,7 @@ impl TelegramChannel {
if let Some(identity) = bind_identity {
self.add_allowed_identity_runtime(&identity);
match self.persist_allowed_identity(&identity).await {
match Box::pin(self.persist_allowed_identity(&identity)).await {
Ok(()) => {
let _ = self
.send(&SendMessage::new(
@@ -1144,6 +1233,11 @@ Allowlist Telegram username (without '@') or numeric user ID.",
return None;
}
// Enter voice-chat mode so outgoing replies get a TTS voice note
if let Ok(mut vc) = self.voice_chats.lock() {
vc.insert(reply_target.clone());
}
// Cache transcription for reply-context lookups
{
let mut cache = self.voice_transcriptions.lock();
@@ -1315,6 +1409,11 @@ Allowlist Telegram username (without '@') or numeric user ID.",
content
};
// Exit voice-chat mode when user switches back to typing
if let Ok(mut vc) = self.voice_chats.lock() {
vc.remove(&reply_target);
}
Some(ChannelMessage {
id: format!("telegram_{chat_id}_{message_id}"),
sender: sender_identity,
@@ -2374,11 +2473,17 @@ impl Channel for TelegramChannel {
.send()
.await?;
if resp.status().is_success() {
return Ok(());
match Self::classify_edit_message_response(resp).await {
EditMessageResult::Success | EditMessageResult::NotModified => return Ok(()),
EditMessageResult::Failed(status) => {
tracing::debug!(
status = ?status,
"Telegram finalize_draft HTML edit failed; retrying without parse_mode"
);
}
}
// Markdown failed — retry without parse_mode
// HTML failed — retry without parse_mode
let plain_body = serde_json::json!({
"chat_id": chat_id,
"message_id": id,
@@ -2392,14 +2497,45 @@ impl Channel for TelegramChannel {
.send()
.await?;
if resp.status().is_success() {
return Ok(());
match Self::classify_edit_message_response(resp).await {
EditMessageResult::Success | EditMessageResult::NotModified => return Ok(()),
EditMessageResult::Failed(status) => {
tracing::warn!(
status = ?status,
"Telegram finalize_draft plain edit failed; attempting delete+send fallback"
);
}
}
// Edit failed entirely — fall back to new message
tracing::warn!("Telegram finalize_draft edit failed; falling back to sendMessage");
self.send_text_chunks(text, &chat_id, thread_id.as_deref())
.await
let delete_resp = self
.client
.post(self.api_url("deleteMessage"))
.json(&serde_json::json!({
"chat_id": chat_id,
"message_id": id,
}))
.send()
.await;
match delete_resp {
Ok(resp) if resp.status().is_success() => {
self.send_text_chunks(text, &chat_id, thread_id.as_deref())
.await
}
Ok(resp) => {
tracing::warn!(
status = ?resp.status(),
"Telegram finalize_draft delete failed; skipping sendMessage to avoid duplicate"
);
Ok(())
}
Err(err) => {
tracing::warn!(
"Telegram finalize_draft delete request failed: {err}; skipping sendMessage to avoid duplicate"
);
Ok(())
}
}
}
async fn cancel_draft(&self, recipient: &str, message_id: &str) -> anyhow::Result<()> {
@@ -2443,6 +2579,84 @@ impl Channel for TelegramChannel {
None => (message.recipient.as_str(), None),
};
// Voice chat mode: send text normally AND queue a voice note of the
// final answer. Text in → text out. Voice in → text + voice out.
let is_voice_chat = self
.voice_chats
.lock()
.map(|vs| vs.contains(&message.recipient))
.unwrap_or(false);
if is_voice_chat && self.tts_config.is_some() {
// Only queue substantive natural-language replies for voice.
// Skip tool outputs: URLs, JSON, code blocks, errors, short status.
let is_substantive = content.len() > 40
&& !content.starts_with("http")
&& !content.starts_with('{')
&& !content.starts_with('[')
&& !content.starts_with("Error")
&& !content.contains("```")
&& !content.contains("tool_call")
&& !content.contains("wttr.in");
if is_substantive {
if let Ok(mut pv) = self.pending_voice.lock() {
pv.insert(
message.recipient.clone(),
(content.clone(), std::time::Instant::now()),
);
}
let pending = self.pending_voice.clone();
let voice_chats = self.voice_chats.clone();
let api_base = self.api_base.clone();
let bot_token = self.bot_token.clone();
let chat_id_owned = chat_id.to_string();
let thread_id_owned = thread_id.map(str::to_string);
let recipient = message.recipient.clone();
let tts_config = self.tts_config.clone().unwrap();
tokio::spawn(async move {
// Wait 10 seconds — long enough for the agent to finish its
// full tool chain and send the final answer.
tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
// Atomic check-and-remove: only one task gets the value
let to_voice = pending.lock().ok().and_then(|mut pv| {
if let Some((_, ts)) = pv.get(&recipient) {
if ts.elapsed().as_secs() >= 8 {
return pv.remove(&recipient).map(|(text, _)| text);
}
}
None
});
if let Some(text) = to_voice {
if let Ok(mut vc) = voice_chats.lock() {
vc.remove(&recipient);
}
match Self::synthesize_and_send_voice(
&api_base,
&bot_token,
&chat_id_owned,
thread_id_owned.as_deref(),
&text,
&tts_config,
)
.await
{
Ok(()) => {
tracing::info!("Telegram: voice reply sent ({} chars)", text.len());
}
Err(e) => {
tracing::warn!("Telegram: TTS voice reply failed: {e}");
}
}
}
});
}
}
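The "substantive reply" filter in the voice-chat branch above can be extracted into a pure function (a hypothetical refactor, mirroring the inline checks exactly) so the heuristic is testable without a channel instance:

```rust
// Pure version of the inline heuristic above: only voice long natural-language
// replies, skip tool outputs (URLs, JSON, code blocks, errors, short status).
fn is_substantive(content: &str) -> bool {
    content.len() > 40
        && !content.starts_with("http")
        && !content.starts_with('{')
        && !content.starts_with('[')
        && !content.starts_with("Error")
        && !content.contains("```")
        && !content.contains("tool_call")
        && !content.contains("wttr.in")
}

fn main() {
    // Natural-language answer long enough to be worth voicing.
    assert!(is_substantive(
        "The deployment finished successfully and all health checks are passing."
    ));
    // Tool-style outputs are skipped.
    assert!(!is_substantive(
        "https://example.com/some/long/result/url/for/testing/purposes"
    ));
    assert!(!is_substantive(
        "{\"status\": \"ok\", \"details\": \"everything is fine over here\"}"
    ));
    assert!(!is_substantive("short status"));
}
```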
// Always send text reply (voice chat gets both text and voice)
let (text_without_markers, attachments) = parse_attachment_markers(&content);
if !attachments.is_empty() {
@@ -2627,17 +2841,19 @@ Ensure only one `zeroclaw` process is using this bot token."
} else if let Some(m) = self.try_parse_attachment_message(update).await {
m
} else {
self.handle_unauthorized_message(update).await;
Box::pin(self.handle_unauthorized_message(update)).await;
continue;
};
if let Some((reaction_chat_id, reaction_message_id)) =
Self::extract_update_message_target(update)
{
self.try_add_ack_reaction_nonblocking(
reaction_chat_id,
reaction_message_id,
);
if self.ack_reactions {
if let Some((reaction_chat_id, reaction_message_id)) =
Self::extract_update_message_target(update)
{
self.try_add_ack_reaction_nonblocking(
reaction_chat_id,
reaction_message_id,
);
}
}
// Send "typing" indicator immediately when we receive a message
@@ -4623,4 +4839,24 @@ mod tests {
// the agent loop will return ProviderCapabilityError before calling
// the provider, and the channel will send "⚠️ Error: ..." to the user.
}
#[test]
fn ack_reactions_defaults_to_true() {
let ch = TelegramChannel::new("token".into(), vec!["*".into()], false);
assert!(ch.ack_reactions);
}
#[test]
fn with_ack_reactions_false_disables_reactions() {
let ch =
TelegramChannel::new("token".into(), vec!["*".into()], false).with_ack_reactions(false);
assert!(!ch.ack_reactions);
}
#[test]
fn with_ack_reactions_true_keeps_reactions() {
let ch =
TelegramChannel::new("token".into(), vec!["*".into()], false).with_ack_reactions(true);
assert!(ch.ack_reactions);
}
}
+802 -32
@@ -1,11 +1,19 @@
use std::collections::HashMap;
use anyhow::{bail, Context, Result};
use async_trait::async_trait;
use reqwest::multipart::{Form, Part};
use crate::config::TranscriptionConfig;
/// Maximum upload size accepted by the Groq Whisper API (25 MB).
/// Maximum upload size accepted by most Whisper-compatible APIs (25 MB).
const MAX_AUDIO_BYTES: usize = 25 * 1024 * 1024;
/// Request timeout for transcription API calls (seconds).
const TRANSCRIPTION_TIMEOUT_SECS: u64 = 120;
// ── Audio utilities ─────────────────────────────────────────────
/// Map file extension to MIME type for Whisper-compatible transcription APIs.
fn mime_for_audio(extension: &str) -> Option<&'static str> {
match extension.to_ascii_lowercase().as_str() {
@@ -31,16 +39,51 @@ fn normalize_audio_filename(file_name: &str) -> String {
}
}
/// Transcribe audio bytes via a Whisper-compatible transcription API.
/// Resolve the API key for voice transcription.
///
/// Returns the transcribed text on success. Requires `GROQ_API_KEY` in the
/// environment. The caller is responsible for enforcing duration limits
/// *before* downloading the file; this function enforces the byte-size cap.
pub async fn transcribe_audio(
audio_data: Vec<u8>,
file_name: &str,
config: &TranscriptionConfig,
) -> Result<String> {
/// Priority order:
/// 1. Explicit `config.api_key` (if set and non-empty).
/// 2. Provider-specific env var based on `api_url`:
/// - URL contains "openai.com" -> `OPENAI_API_KEY`
/// - URL contains "groq.com" -> `GROQ_API_KEY`
/// 3. Fallback chain: `TRANSCRIPTION_API_KEY` -> `GROQ_API_KEY` -> `OPENAI_API_KEY`.
fn resolve_transcription_api_key(config: &TranscriptionConfig) -> Result<String> {
// 1. Explicit config key
if let Some(ref key) = config.api_key {
let trimmed = key.trim();
if !trimmed.is_empty() {
return Ok(trimmed.to_string());
}
}
// 2. Provider-specific env var based on API URL
if config.api_url.contains("openai.com") {
if let Ok(key) = std::env::var("OPENAI_API_KEY") {
return Ok(key);
}
} else if config.api_url.contains("groq.com") {
if let Ok(key) = std::env::var("GROQ_API_KEY") {
return Ok(key);
}
}
// 3. Fallback chain
for var in ["TRANSCRIPTION_API_KEY", "GROQ_API_KEY", "OPENAI_API_KEY"] {
if let Ok(key) = std::env::var(var) {
return Ok(key);
}
}
bail!(
"No API key found for voice transcription — set one of: \
transcription.api_key in config, TRANSCRIPTION_API_KEY, GROQ_API_KEY, or OPENAI_API_KEY"
);
}
/// Validate audio data and resolve MIME type from file name.
///
/// Returns `(normalized_filename, mime_type)` on success.
fn validate_audio(audio_data: &[u8], file_name: &str) -> Result<(String, &'static str)> {
if audio_data.len() > MAX_AUDIO_BYTES {
bail!(
"Audio file too large ({} bytes, max {MAX_AUDIO_BYTES})",
@@ -59,33 +102,494 @@ pub async fn transcribe_audio(
)
})?;
let api_key = std::env::var("GROQ_API_KEY").context(
"GROQ_API_KEY environment variable is not set — required for voice transcription",
)?;
Ok((normalized_name, mime))
}
let client = crate::config::build_runtime_proxy_client("transcription.groq");
// ── TranscriptionProvider trait ─────────────────────────────────
let file_part = Part::bytes(audio_data)
.file_name(normalized_name)
.mime_str(mime)?;
/// Trait for speech-to-text provider implementations.
#[async_trait]
pub trait TranscriptionProvider: Send + Sync {
/// Human-readable provider name (e.g. "groq", "openai").
fn name(&self) -> &str;
let mut form = Form::new()
.part("file", file_part)
.text("model", config.model.clone())
.text("response_format", "json");
/// Transcribe raw audio bytes. `file_name` includes the extension for
/// format detection (e.g. "voice.ogg").
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String>;
if let Some(ref lang) = config.language {
form = form.text("language", lang.clone());
/// List of supported audio file extensions.
fn supported_formats(&self) -> Vec<String> {
vec![
"flac", "mp3", "mpeg", "mpga", "mp4", "m4a", "ogg", "oga", "opus", "wav", "webm",
]
.into_iter()
.map(String::from)
.collect()
}
}
// ── GroqProvider ────────────────────────────────────────────────
/// Groq Whisper API provider (default, backward-compatible with existing config).
pub struct GroqProvider {
api_url: String,
model: String,
api_key: String,
language: Option<String>,
}
impl GroqProvider {
/// Build from the existing `TranscriptionConfig` fields.
///
/// Credential resolution order:
/// 1. `config.api_key`
/// 2. `GROQ_API_KEY` environment variable (backward compatibility)
pub fn from_config(config: &TranscriptionConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.or_else(|| {
std::env::var("GROQ_API_KEY")
.ok()
.map(|v| v.trim().to_string())
.filter(|v| !v.is_empty())
})
.context(
"Missing transcription API key: set [transcription].api_key or GROQ_API_KEY environment variable",
)?;
Ok(Self {
api_url: config.api_url.clone(),
model: config.model.clone(),
api_key,
language: config.language.clone(),
})
}
}
#[async_trait]
impl TranscriptionProvider for GroqProvider {
fn name(&self) -> &str {
"groq"
}
let resp = client
.post(&config.api_url)
.bearer_auth(&api_key)
.multipart(form)
.send()
.await
.context("Failed to send transcription request")?;
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (normalized_name, mime) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.groq");
let file_part = Part::bytes(audio_data.to_vec())
.file_name(normalized_name)
.mime_str(mime)?;
let mut form = Form::new()
.part("file", file_part)
.text("model", self.model.clone())
.text("response_format", "json");
if let Some(ref lang) = self.language {
form = form.text("language", lang.clone());
}
let resp = client
.post(&self.api_url)
.bearer_auth(&self.api_key)
.multipart(form)
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to send transcription request to Groq")?;
parse_whisper_response(resp).await
}
}
// ── OpenAiWhisperProvider ───────────────────────────────────────
/// OpenAI Whisper API provider.
pub struct OpenAiWhisperProvider {
api_key: String,
model: String,
}
impl OpenAiWhisperProvider {
pub fn from_config(config: &crate::config::OpenAiSttConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.context("Missing OpenAI STT API key: set [transcription.openai].api_key")?;
Ok(Self {
api_key,
model: config.model.clone(),
})
}
}
#[async_trait]
impl TranscriptionProvider for OpenAiWhisperProvider {
fn name(&self) -> &str {
"openai"
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (normalized_name, mime) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.openai");
let file_part = Part::bytes(audio_data.to_vec())
.file_name(normalized_name)
.mime_str(mime)?;
let form = Form::new()
.part("file", file_part)
.text("model", self.model.clone())
.text("response_format", "json");
let resp = client
.post("https://api.openai.com/v1/audio/transcriptions")
.bearer_auth(&self.api_key)
.multipart(form)
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to send transcription request to OpenAI")?;
parse_whisper_response(resp).await
}
}
// ── DeepgramProvider ────────────────────────────────────────────
/// Deepgram STT API provider.
pub struct DeepgramProvider {
api_key: String,
model: String,
}
impl DeepgramProvider {
pub fn from_config(config: &crate::config::DeepgramSttConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.context("Missing Deepgram API key: set [transcription.deepgram].api_key")?;
Ok(Self {
api_key,
model: config.model.clone(),
})
}
}
#[async_trait]
impl TranscriptionProvider for DeepgramProvider {
fn name(&self) -> &str {
"deepgram"
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (_, mime) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.deepgram");
let url = format!(
"https://api.deepgram.com/v1/listen?model={}&punctuate=true",
self.model
);
let resp = client
.post(&url)
.header("Authorization", format!("Token {}", self.api_key))
.header("Content-Type", mime)
.body(audio_data.to_vec())
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to send transcription request to Deepgram")?;
let status = resp.status();
let body: serde_json::Value = resp
.json()
.await
.context("Failed to parse Deepgram response")?;
if !status.is_success() {
let error_msg = body["err_msg"]
.as_str()
.or_else(|| body["error"].as_str())
.unwrap_or("unknown error");
bail!("Deepgram API error ({}): {}", status, error_msg);
}
let text = body["results"]["channels"][0]["alternatives"][0]["transcript"]
.as_str()
.context("Deepgram response missing transcript field")?
.to_string();
Ok(text)
}
}
// ── AssemblyAiProvider ──────────────────────────────────────────
/// AssemblyAI STT API provider.
pub struct AssemblyAiProvider {
api_key: String,
}
impl AssemblyAiProvider {
pub fn from_config(config: &crate::config::AssemblyAiSttConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.context("Missing AssemblyAI API key: set [transcription.assemblyai].api_key")?;
Ok(Self { api_key })
}
}
#[async_trait]
impl TranscriptionProvider for AssemblyAiProvider {
fn name(&self) -> &str {
"assemblyai"
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (_, _) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.assemblyai");
// Step 1: Upload the audio file.
let upload_resp = client
.post("https://api.assemblyai.com/v2/upload")
.header("Authorization", &self.api_key)
.header("Content-Type", "application/octet-stream")
.body(audio_data.to_vec())
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to upload audio to AssemblyAI")?;
let upload_status = upload_resp.status();
let upload_body: serde_json::Value = upload_resp
.json()
.await
.context("Failed to parse AssemblyAI upload response")?;
if !upload_status.is_success() {
let error_msg = upload_body["error"].as_str().unwrap_or("unknown error");
bail!("AssemblyAI upload error ({}): {}", upload_status, error_msg);
}
let upload_url = upload_body["upload_url"]
.as_str()
.context("AssemblyAI upload response missing 'upload_url'")?;
// Step 2: Create transcription job.
let transcript_req = serde_json::json!({
"audio_url": upload_url,
});
let create_resp = client
.post("https://api.assemblyai.com/v2/transcript")
.header("Authorization", &self.api_key)
.json(&transcript_req)
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to create AssemblyAI transcription")?;
let create_status = create_resp.status();
let create_body: serde_json::Value = create_resp
.json()
.await
.context("Failed to parse AssemblyAI create response")?;
if !create_status.is_success() {
let error_msg = create_body["error"].as_str().unwrap_or("unknown error");
bail!(
"AssemblyAI transcription error ({}): {}",
create_status,
error_msg
);
}
let transcript_id = create_body["id"]
.as_str()
.context("AssemblyAI response missing 'id'")?;
// Step 3: Poll for completion.
let poll_url = format!("https://api.assemblyai.com/v2/transcript/{transcript_id}");
let poll_interval = std::time::Duration::from_secs(3);
let poll_deadline = tokio::time::Instant::now() + std::time::Duration::from_secs(180);
while tokio::time::Instant::now() < poll_deadline {
tokio::time::sleep(poll_interval).await;
let poll_resp = client
.get(&poll_url)
.header("Authorization", &self.api_key)
.timeout(std::time::Duration::from_secs(30))
.send()
.await
.context("Failed to poll AssemblyAI transcription")?;
let poll_status = poll_resp.status();
let poll_body: serde_json::Value = poll_resp
.json()
.await
.context("Failed to parse AssemblyAI poll response")?;
if !poll_status.is_success() {
let error_msg = poll_body["error"].as_str().unwrap_or("unknown poll error");
bail!("AssemblyAI poll error ({}): {}", poll_status, error_msg);
}
let status_str = poll_body["status"].as_str().unwrap_or("unknown");
match status_str {
"completed" => {
let text = poll_body["text"]
.as_str()
.context("AssemblyAI response missing 'text'")?
.to_string();
return Ok(text);
}
"error" => {
let error_msg = poll_body["error"]
.as_str()
.unwrap_or("unknown transcription error");
bail!("AssemblyAI transcription failed: {}", error_msg);
}
_ => {}
}
}
bail!("AssemblyAI transcription timed out after 180s")
}
}
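The poll-until-complete pattern above (fixed interval, hard deadline, terminal "completed"/"error" states) can be reduced to a synchronous sketch; the real code uses tokio timers and an HTTP GET, here `check` is a stand-in for one poll request:

```rust
use std::time::{Duration, Instant};

// Synchronous sketch of polling with a deadline: call `check` every
// `interval` until it yields a transcript or the deadline passes.
fn poll_until_done(
    mut check: impl FnMut() -> Option<String>,
    interval: Duration,
    deadline: Duration,
) -> Result<String, String> {
    let end = Instant::now() + deadline;
    while Instant::now() < end {
        if let Some(text) = check() {
            return Ok(text);
        }
        std::thread::sleep(interval);
    }
    Err("transcription timed out".to_string())
}

fn main() {
    // Completes on the third poll.
    let mut calls = 0;
    let result = poll_until_done(
        || {
            calls += 1;
            if calls >= 3 { Some("hello world".to_string()) } else { None }
        },
        Duration::from_millis(1),
        Duration::from_secs(5),
    );
    assert_eq!(result, Ok("hello world".to_string()));
    // Never completes: the deadline error surfaces.
    let timed_out = poll_until_done(|| None, Duration::from_millis(1), Duration::from_millis(5));
    assert!(timed_out.is_err());
}
```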
// ── GoogleSttProvider ───────────────────────────────────────────
/// Google Cloud Speech-to-Text API provider.
pub struct GoogleSttProvider {
api_key: String,
language_code: String,
}
impl GoogleSttProvider {
pub fn from_config(config: &crate::config::GoogleSttConfig) -> Result<Self> {
let api_key = config
.api_key
.as_deref()
.map(str::trim)
.filter(|v| !v.is_empty())
.map(ToOwned::to_owned)
.context("Missing Google STT API key: set [transcription.google].api_key")?;
Ok(Self {
api_key,
language_code: config.language_code.clone(),
})
}
}
#[async_trait]
impl TranscriptionProvider for GoogleSttProvider {
fn name(&self) -> &str {
"google"
}
fn supported_formats(&self) -> Vec<String> {
// Google Cloud STT supports a subset of formats.
vec!["flac", "wav", "ogg", "opus", "mp3", "webm"]
.into_iter()
.map(String::from)
.collect()
}
async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
let (normalized_name, _) = validate_audio(audio_data, file_name)?;
let client = crate::config::build_runtime_proxy_client("transcription.google");
let encoding = match normalized_name
.rsplit_once('.')
.map(|(_, e)| e.to_ascii_lowercase())
.as_deref()
{
Some("flac") => "FLAC",
Some("wav") => "LINEAR16",
Some("ogg" | "opus") => "OGG_OPUS",
Some("mp3") => "MP3",
Some("webm") => "WEBM_OPUS",
Some(ext) => bail!("Google STT does not support '.{ext}' input"),
None => bail!("Google STT requires a file extension"),
};
let audio_content =
base64::Engine::encode(&base64::engine::general_purpose::STANDARD, audio_data);
let request_body = serde_json::json!({
"config": {
"encoding": encoding,
"languageCode": &self.language_code,
"enableAutomaticPunctuation": true,
},
"audio": {
"content": audio_content,
}
});
let url = format!(
"https://speech.googleapis.com/v1/speech:recognize?key={}",
self.api_key
);
let resp = client
.post(&url)
.json(&request_body)
.timeout(std::time::Duration::from_secs(TRANSCRIPTION_TIMEOUT_SECS))
.send()
.await
.context("Failed to send transcription request to Google STT")?;
let status = resp.status();
let body: serde_json::Value = resp
.json()
.await
.context("Failed to parse Google STT response")?;
if !status.is_success() {
let error_msg = body["error"]["message"].as_str().unwrap_or("unknown error");
bail!("Google STT API error ({}): {}", status, error_msg);
}
let text = body["results"][0]["alternatives"][0]["transcript"]
.as_str()
.unwrap_or("")
.to_string();
Ok(text)
}
}
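The extension-to-encoding mapping inside `transcribe` above can be isolated as a small table lookup; a sketch that returns `None` where the real code bails with an `anyhow` error:

```rust
// Standalone version of the extension → Google STT encoding mapping.
fn google_encoding(file_name: &str) -> Option<&'static str> {
    let ext = file_name.rsplit_once('.')?.1.to_ascii_lowercase();
    match ext.as_str() {
        "flac" => Some("FLAC"),
        "wav" => Some("LINEAR16"),
        "ogg" | "opus" => Some("OGG_OPUS"),
        "mp3" => Some("MP3"),
        "webm" => Some("WEBM_OPUS"),
        _ => None,
    }
}

fn main() {
    assert_eq!(google_encoding("voice.ogg"), Some("OGG_OPUS"));
    assert_eq!(google_encoding("clip.WAV"), Some("LINEAR16"));
    assert_eq!(google_encoding("track.aac"), None); // unsupported format
    assert_eq!(google_encoding("noextension"), None); // no extension at all
}
```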
// ── Shared response parsing ─────────────────────────────────────
/// Parse a standard Whisper-compatible JSON response (`{ "text": "..." }`).
async fn parse_whisper_response(resp: reqwest::Response) -> Result<String> {
let status = resp.status();
let body: serde_json::Value = resp
.json()
@@ -105,6 +609,128 @@ pub async fn transcribe_audio(
Ok(text)
}
// ── TranscriptionManager ────────────────────────────────────────
/// Manages multiple STT providers and routes transcription requests.
pub struct TranscriptionManager {
providers: HashMap<String, Box<dyn TranscriptionProvider>>,
default_provider: String,
}
impl TranscriptionManager {
/// Build a `TranscriptionManager` from config.
///
/// Always attempts to register the Groq provider from existing config fields.
/// Additional providers are registered when their config sections are present.
///
/// Providers whose API keys are missing are silently skipped — the error
/// surfaces at transcribe time, so callers that target a different default
/// provider are not blocked.
pub fn new(config: &TranscriptionConfig) -> Result<Self> {
let mut providers: HashMap<String, Box<dyn TranscriptionProvider>> = HashMap::new();
if let Ok(groq) = GroqProvider::from_config(config) {
providers.insert("groq".to_string(), Box::new(groq));
}
if let Some(ref openai_cfg) = config.openai {
if let Ok(p) = OpenAiWhisperProvider::from_config(openai_cfg) {
providers.insert("openai".to_string(), Box::new(p));
}
}
if let Some(ref deepgram_cfg) = config.deepgram {
if let Ok(p) = DeepgramProvider::from_config(deepgram_cfg) {
providers.insert("deepgram".to_string(), Box::new(p));
}
}
if let Some(ref assemblyai_cfg) = config.assemblyai {
if let Ok(p) = AssemblyAiProvider::from_config(assemblyai_cfg) {
providers.insert("assemblyai".to_string(), Box::new(p));
}
}
if let Some(ref google_cfg) = config.google {
if let Ok(p) = GoogleSttProvider::from_config(google_cfg) {
providers.insert("google".to_string(), Box::new(p));
}
}
let default_provider = config.default_provider.clone();
if config.enabled && !providers.contains_key(&default_provider) {
let available: Vec<&str> = providers.keys().map(|k| k.as_str()).collect();
bail!(
"Default transcription provider '{}' is not configured. Available: {available:?}",
default_provider
);
}
Ok(Self {
providers,
default_provider,
})
}
/// Transcribe audio using the default provider.
pub async fn transcribe(&self, audio_data: &[u8], file_name: &str) -> Result<String> {
self.transcribe_with_provider(audio_data, file_name, &self.default_provider)
.await
}
/// Transcribe audio using a specific named provider.
pub async fn transcribe_with_provider(
&self,
audio_data: &[u8],
file_name: &str,
provider: &str,
) -> Result<String> {
let p = self.providers.get(provider).ok_or_else(|| {
let available: Vec<&str> = self.providers.keys().map(|k| k.as_str()).collect();
anyhow::anyhow!(
"Transcription provider '{provider}' not configured. Available: {available:?}"
)
})?;
p.transcribe(audio_data, file_name).await
}
/// List registered provider names.
pub fn available_providers(&self) -> Vec<&str> {
self.providers.keys().map(|k| k.as_str()).collect()
}
}
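The manager's routing logic above boils down to a name-keyed registry with a default and a descriptive error listing the configured providers. A minimal synchronous sketch (assumed simplification: providers are plain closures here, not async trait objects as in the real code):

```rust
use std::collections::HashMap;

// Name-keyed provider registry with a default, mirroring
// transcribe()/transcribe_with_provider() above.
struct Registry {
    providers: HashMap<String, Box<dyn Fn(&[u8]) -> String>>,
    default_provider: String,
}

impl Registry {
    fn transcribe_with(&self, audio: &[u8], provider: &str) -> Result<String, String> {
        match self.providers.get(provider) {
            Some(p) => Ok(p(audio)),
            None => {
                let available: Vec<&str> =
                    self.providers.keys().map(|k| k.as_str()).collect();
                Err(format!(
                    "provider '{provider}' not configured. Available: {available:?}"
                ))
            }
        }
    }

    fn transcribe(&self, audio: &[u8]) -> Result<String, String> {
        self.transcribe_with(audio, &self.default_provider)
    }
}

fn main() {
    let mut providers: HashMap<String, Box<dyn Fn(&[u8]) -> String>> = HashMap::new();
    providers.insert("groq".into(), Box::new(|a: &[u8]| format!("groq:{}b", a.len())));
    let reg = Registry { providers, default_provider: "groq".into() };

    // Default provider handles the request.
    assert_eq!(reg.transcribe(&[0u8; 4]), Ok("groq:4b".to_string()));
    // Unknown provider yields the not-configured error.
    assert!(reg
        .transcribe_with(&[], "deepgram")
        .unwrap_err()
        .contains("not configured"));
}
```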
// ── Backward-compatible convenience function ────────────────────
/// Transcribe audio bytes via a Whisper-compatible transcription API.
///
/// Returns the transcribed text on success.
///
/// This is the backward-compatible entry point that preserves the original
/// function signature. It uses the Groq provider directly, matching the
/// original single-provider behavior.
///
/// Credential resolution order:
/// 1. `config.transcription.api_key`
/// 2. `GROQ_API_KEY` environment variable (backward compatibility)
///
/// The caller is responsible for enforcing duration limits *before* downloading
/// the file; this function enforces the byte-size cap.
pub async fn transcribe_audio(
audio_data: Vec<u8>,
file_name: &str,
config: &TranscriptionConfig,
) -> Result<String> {
// Validate audio before resolving credentials so that size/format errors
// are reported before missing-key errors (preserves original behavior).
validate_audio(&audio_data, file_name)?;
let groq = GroqProvider::from_config(config)?;
groq.transcribe(&audio_data, file_name).await
}
#[cfg(test)]
mod tests {
use super::*;
@@ -125,8 +751,10 @@ mod tests {
#[tokio::test]
async fn rejects_missing_api_key() {
// Ensure the key is absent for this test
// Ensure all candidate keys are absent for this test.
std::env::remove_var("GROQ_API_KEY");
std::env::remove_var("OPENAI_API_KEY");
std::env::remove_var("TRANSCRIPTION_API_KEY");
let data = vec![0u8; 100];
let config = TranscriptionConfig::default();
@@ -135,11 +763,29 @@ mod tests {
.await
.unwrap_err();
assert!(
err.to_string().contains("GROQ_API_KEY"),
err.to_string().contains("transcription API key"),
"expected missing-key error, got: {err}"
);
}
#[tokio::test]
async fn uses_config_api_key_without_groq_env() {
std::env::remove_var("GROQ_API_KEY");
let data = vec![0u8; 100];
let mut config = TranscriptionConfig::default();
config.api_key = Some("transcription-key".to_string());
// Keep invalid extension so we fail before network, but after key resolution.
let err = transcribe_audio(data, "recording.aac", &config)
.await
.unwrap_err();
assert!(
err.to_string().contains("Unsupported audio format"),
"expected unsupported-format error, got: {err}"
);
}
#[test]
fn mime_for_audio_maps_accepted_formats() {
let cases = [
@@ -215,4 +861,128 @@ mod tests {
"error should mention the rejected extension, got: {msg}"
);
}
// ── TranscriptionManager tests ──────────────────────────────
#[test]
fn manager_creation_with_default_config() {
std::env::remove_var("GROQ_API_KEY");
let config = TranscriptionConfig::default();
let manager = TranscriptionManager::new(&config).unwrap();
assert_eq!(manager.default_provider, "groq");
// Groq won't be registered without a key.
assert!(manager.providers.is_empty());
}
#[test]
fn manager_registers_groq_with_key() {
std::env::remove_var("GROQ_API_KEY");
let mut config = TranscriptionConfig::default();
config.api_key = Some("test-groq-key".to_string());
let manager = TranscriptionManager::new(&config).unwrap();
assert!(manager.providers.contains_key("groq"));
assert_eq!(manager.providers["groq"].name(), "groq");
}
#[test]
fn manager_registers_multiple_providers() {
std::env::remove_var("GROQ_API_KEY");
let mut config = TranscriptionConfig::default();
config.api_key = Some("test-groq-key".to_string());
config.openai = Some(crate::config::OpenAiSttConfig {
api_key: Some("test-openai-key".to_string()),
model: "whisper-1".to_string(),
});
config.deepgram = Some(crate::config::DeepgramSttConfig {
api_key: Some("test-deepgram-key".to_string()),
model: "nova-2".to_string(),
});
let manager = TranscriptionManager::new(&config).unwrap();
assert!(manager.providers.contains_key("groq"));
assert!(manager.providers.contains_key("openai"));
assert!(manager.providers.contains_key("deepgram"));
assert_eq!(manager.available_providers().len(), 3);
}
#[tokio::test]
async fn manager_rejects_unconfigured_provider() {
std::env::remove_var("GROQ_API_KEY");
let mut config = TranscriptionConfig::default();
config.api_key = Some("test-groq-key".to_string());
let manager = TranscriptionManager::new(&config).unwrap();
let err = manager
.transcribe_with_provider(&[0u8; 100], "test.ogg", "nonexistent")
.await
.unwrap_err();
assert!(
err.to_string().contains("not configured"),
"expected not-configured error, got: {err}"
);
}
#[test]
fn manager_default_provider_from_config() {
std::env::remove_var("GROQ_API_KEY");
let mut config = TranscriptionConfig::default();
config.default_provider = "openai".to_string();
config.openai = Some(crate::config::OpenAiSttConfig {
api_key: Some("test-openai-key".to_string()),
model: "whisper-1".to_string(),
});
let manager = TranscriptionManager::new(&config).unwrap();
assert_eq!(manager.default_provider, "openai");
}
#[test]
fn validate_audio_rejects_oversized() {
let big = vec![0u8; MAX_AUDIO_BYTES + 1];
let err = validate_audio(&big, "test.ogg").unwrap_err();
assert!(err.to_string().contains("too large"));
}
#[test]
fn validate_audio_rejects_unsupported_format() {
let data = vec![0u8; 100];
let err = validate_audio(&data, "test.aac").unwrap_err();
assert!(err.to_string().contains("Unsupported audio format"));
}
#[test]
fn validate_audio_accepts_supported_format() {
let data = vec![0u8; 100];
let (name, mime) = validate_audio(&data, "test.ogg").unwrap();
assert_eq!(name, "test.ogg");
assert_eq!(mime, "audio/ogg");
}
#[test]
fn validate_audio_normalizes_oga() {
let data = vec![0u8; 100];
let (name, mime) = validate_audio(&data, "voice.oga").unwrap();
assert_eq!(name, "voice.ogg");
assert_eq!(mime, "audio/ogg");
}
#[test]
fn backward_compat_config_defaults_unchanged() {
let config = TranscriptionConfig::default();
assert!(!config.enabled);
assert!(config.api_key.is_none());
assert!(config.api_url.contains("groq.com"));
assert_eq!(config.model, "whisper-large-v3-turbo");
assert_eq!(config.default_provider, "groq");
assert!(config.openai.is_none());
assert!(config.deepgram.is_none());
assert!(config.assemblyai.is_none());
assert!(config.google.is_none());
}
}
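The validation rules the tests above exercise (size cap, extension whitelist, `.oga` → `.ogg` normalization) can be sketched in isolation. This is a minimal stand-in, not the crate's `validate_audio`: the 25 MB cap and the exact extension list are assumptions for illustration.

```rust
// Hypothetical size cap for this sketch; the real MAX_AUDIO_BYTES lives in the crate.
const MAX_AUDIO_BYTES_SKETCH: usize = 25 * 1024 * 1024;

/// Map audio bytes + filename to (normalized name, MIME type), rejecting
/// oversized payloads and unsupported extensions.
fn validate_audio_sketch(data: &[u8], file_name: &str) -> Result<(String, &'static str), String> {
    if data.len() > MAX_AUDIO_BYTES_SKETCH {
        return Err(format!("audio too large: {} bytes", data.len()));
    }
    let ext = file_name.rsplit('.').next().unwrap_or("").to_ascii_lowercase();
    match ext.as_str() {
        // Voice notes often arrive as .oga; normalize to .ogg for STT APIs.
        "oga" => Ok((format!("{}.ogg", file_name.trim_end_matches(".oga")), "audio/ogg")),
        "ogg" => Ok((file_name.to_string(), "audio/ogg")),
        "mp3" => Ok((file_name.to_string(), "audio/mpeg")),
        "m4a" => Ok((file_name.to_string(), "audio/mp4")),
        "wav" => Ok((file_name.to_string(), "audio/wav")),
        "webm" => Ok((file_name.to_string(), "audio/webm")),
        other => Err(format!("Unsupported audio format: {other}")),
    }
}
```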
@@ -85,6 +85,7 @@ impl TtsProvider for OpenAiTtsProvider {
"input": text,
"voice": voice,
"speed": self.speed,
"response_format": "opus",
});
let resp = self
@@ -0,0 +1,485 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use async_trait::async_trait;
use serde_json::json;
use std::collections::HashSet;
use std::sync::Arc;
use tokio::sync::RwLock;
use uuid::Uuid;
const TWITTER_API_BASE: &str = "https://api.x.com/2";
/// X/Twitter channel — uses the Twitter API v2 with an OAuth 2.0 Bearer Token
/// for sending tweets/DMs, and polls the mentions timeline for incoming
/// messages (the filtered stream requires elevated API access).
pub struct TwitterChannel {
bearer_token: String,
allowed_users: Vec<String>,
/// Message deduplication set.
dedup: Arc<RwLock<HashSet<String>>>,
}
/// Deduplication set capacity — evict half of entries when full.
const DEDUP_CAPACITY: usize = 10_000;
impl TwitterChannel {
pub fn new(bearer_token: String, allowed_users: Vec<String>) -> Self {
Self {
bearer_token,
allowed_users,
dedup: Arc::new(RwLock::new(HashSet::new())),
}
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.twitter")
}
fn is_user_allowed(&self, user_id: &str) -> bool {
self.allowed_users.iter().any(|u| u == "*" || u == user_id)
}
/// Check whether a tweet ID has already been seen, inserting it if not.
/// Returns `true` for duplicates; empty IDs are never deduplicated.
async fn is_duplicate(&self, tweet_id: &str) -> bool {
if tweet_id.is_empty() {
return false;
}
let mut dedup = self.dedup.write().await;
if dedup.contains(tweet_id) {
return true;
}
if dedup.len() >= DEDUP_CAPACITY {
let to_remove: Vec<String> = dedup.iter().take(DEDUP_CAPACITY / 2).cloned().collect();
for key in to_remove {
dedup.remove(&key);
}
}
dedup.insert(tweet_id.to_string());
false
}
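Evicting via `HashSet::iter().take(n)` as above removes arbitrary entries, so recently seen IDs can be dropped early. A FIFO variant is one possible alternative (a sketch, not what the channel uses): pair the set with a queue so the oldest IDs are evicted first.

```rust
use std::collections::{HashSet, VecDeque};

/// Bounded dedup set with FIFO eviction: the oldest IDs go first,
/// so recently seen IDs are never dropped early.
struct FifoDedup {
    seen: HashSet<String>,
    order: VecDeque<String>,
    capacity: usize,
}

impl FifoDedup {
    fn new(capacity: usize) -> Self {
        Self { seen: HashSet::new(), order: VecDeque::new(), capacity }
    }

    /// Returns true if `id` was already seen; otherwise records it,
    /// evicting the oldest entry when the set is full.
    fn is_duplicate(&mut self, id: &str) -> bool {
        if self.seen.contains(id) {
            return true;
        }
        if self.order.len() >= self.capacity {
            if let Some(oldest) = self.order.pop_front() {
                self.seen.remove(&oldest);
            }
        }
        self.seen.insert(id.to_string());
        self.order.push_back(id.to_string());
        false
    }
}
```

The tradeoff: two collections instead of one, in exchange for deterministic eviction order.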
/// Get the authenticated user's ID for filtered stream rules.
async fn get_authenticated_user_id(&self) -> anyhow::Result<String> {
let resp = self
.http_client()
.get(format!("{TWITTER_API_BASE}/users/me"))
.bearer_auth(&self.bearer_token)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status();
let err = resp.text().await.unwrap_or_default();
anyhow::bail!("Twitter users/me failed ({status}): {err}");
}
let data: serde_json::Value = resp.json().await?;
let user_id = data
.get("data")
.and_then(|d| d.get("id"))
.and_then(|id| id.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing user id in Twitter response"))?
.to_string();
Ok(user_id)
}
/// Send a reply tweet.
async fn create_tweet(
&self,
text: &str,
reply_tweet_id: Option<&str>,
) -> anyhow::Result<String> {
let mut body = json!({ "text": text });
if let Some(reply_id) = reply_tweet_id {
body["reply"] = json!({ "in_reply_to_tweet_id": reply_id });
}
let resp = self
.http_client()
.post(format!("{TWITTER_API_BASE}/tweets"))
.bearer_auth(&self.bearer_token)
.json(&body)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status();
let err = resp.text().await.unwrap_or_default();
anyhow::bail!("Twitter create tweet failed ({status}): {err}");
}
let data: serde_json::Value = resp.json().await?;
let tweet_id = data
.get("data")
.and_then(|d| d.get("id"))
.and_then(|id| id.as_str())
.unwrap_or("")
.to_string();
Ok(tweet_id)
}
/// Send a DM to a user.
async fn send_dm(&self, recipient_id: &str, text: &str) -> anyhow::Result<()> {
let body = json!({
"text": text,
});
let resp = self
.http_client()
.post(format!(
"{TWITTER_API_BASE}/dm_conversations/with/{recipient_id}/messages"
))
.bearer_auth(&self.bearer_token)
.json(&body)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status();
let err = resp.text().await.unwrap_or_default();
anyhow::bail!("Twitter DM send failed ({status}): {err}");
}
Ok(())
}
}
#[async_trait]
impl Channel for TwitterChannel {
fn name(&self) -> &str {
"twitter"
}
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
// recipient format: "dm:{user_id}" for DMs, "tweet:{tweet_id}" for replies
if let Some(user_id) = message.recipient.strip_prefix("dm:") {
// Twitter API enforces a 280 char limit on tweets but DMs can be up to 10000.
self.send_dm(user_id, &message.content).await
} else if let Some(tweet_id) = message.recipient.strip_prefix("tweet:") {
// Split long replies into tweet threads (280 char limit).
let chunks = split_tweet_text(&message.content, 280);
let mut reply_to = tweet_id.to_string();
for chunk in chunks {
reply_to = self.create_tweet(&chunk, Some(&reply_to)).await?;
}
Ok(())
} else {
// Default: treat as tweet reply
let chunks = split_tweet_text(&message.content, 280);
let mut reply_to = message.recipient.clone();
for chunk in chunks {
reply_to = self.create_tweet(&chunk, Some(&reply_to)).await?;
}
Ok(())
}
}
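The recipient routing above (`dm:{user_id}` vs `tweet:{tweet_id}`, with bare IDs defaulting to a tweet reply) amounts to a small prefix parser. Sketched with a hypothetical enum (`Target` and `parse_recipient` are illustration names, not part of the channel):

```rust
/// Hypothetical routing target mirroring the prefix convention send() uses.
#[derive(Debug, PartialEq)]
enum Target {
    Dm(String),
    TweetReply(String),
}

fn parse_recipient(recipient: &str) -> Target {
    if let Some(user_id) = recipient.strip_prefix("dm:") {
        Target::Dm(user_id.to_string())
    } else if let Some(tweet_id) = recipient.strip_prefix("tweet:") {
        Target::TweetReply(tweet_id.to_string())
    } else {
        // Bare IDs default to a tweet reply, matching the send() fallback arm.
        Target::TweetReply(recipient.to_string())
    }
}
```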
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> anyhow::Result<()> {
tracing::info!("Twitter: authenticating...");
let bot_user_id = self.get_authenticated_user_id().await?;
tracing::info!("Twitter: authenticated as user {bot_user_id}");
// Poll the mentions timeline: the filtered stream requires elevated
// API access, so polling is the more broadly available approach.
let mut since_id: Option<String> = None;
let poll_interval = std::time::Duration::from_secs(15);
loop {
let mut url = format!(
"{TWITTER_API_BASE}/users/{bot_user_id}/mentions?tweet.fields=author_id,conversation_id,created_at&expansions=author_id&max_results=20"
);
if let Some(ref id) = since_id {
use std::fmt::Write;
let _ = write!(url, "&since_id={id}");
}
match self
.http_client()
.get(&url)
.bearer_auth(&self.bearer_token)
.send()
.await
{
Ok(resp) if resp.status().is_success() => {
let data: serde_json::Value = match resp.json().await {
Ok(d) => d,
Err(e) => {
tracing::warn!("Twitter: failed to parse mentions response: {e}");
tokio::time::sleep(poll_interval).await;
continue;
}
};
if let Some(tweets) = data.get("data").and_then(|d| d.as_array()) {
// Build user lookup map from includes
let user_map: std::collections::HashMap<String, String> = data
.get("includes")
.and_then(|i| i.get("users"))
.and_then(|u| u.as_array())
.map(|users| {
users
.iter()
.filter_map(|u| {
let id = u.get("id")?.as_str()?.to_string();
let username = u.get("username")?.as_str()?.to_string();
Some((id, username))
})
.collect()
})
.unwrap_or_default();
// Process tweets in chronological order (oldest first)
for tweet in tweets.iter().rev() {
let tweet_id = tweet.get("id").and_then(|i| i.as_str()).unwrap_or("");
let author_id = tweet
.get("author_id")
.and_then(|a| a.as_str())
.unwrap_or("");
let text = tweet.get("text").and_then(|t| t.as_str()).unwrap_or("");
// Skip own tweets
if author_id == bot_user_id {
continue;
}
if self.is_duplicate(tweet_id).await {
continue;
}
let username = user_map
.get(author_id)
.cloned()
.unwrap_or_else(|| author_id.to_string());
if !self.is_user_allowed(&username) && !self.is_user_allowed(author_id)
{
tracing::debug!(
"Twitter: ignoring mention from unauthorized user: {username}"
);
continue;
}
// Strip the @mention from the text
let clean_text = strip_at_mention(text, &bot_user_id);
if clean_text.trim().is_empty() {
continue;
}
let reply_target = format!("tweet:{tweet_id}");
let channel_msg = ChannelMessage {
id: Uuid::new_v4().to_string(),
sender: username,
reply_target,
content: clean_text,
channel: "twitter".to_string(),
timestamp: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: tweet
.get("conversation_id")
.and_then(|c| c.as_str())
.map(|s| s.to_string()),
};
if tx.send(channel_msg).await.is_err() {
tracing::warn!("Twitter: message channel closed");
return Ok(());
}
// Track the newest ID for pagination. Tweet IDs are numeric
// snowflakes, so compare them as integers: a lexicographic string
// compare would misorder IDs of different lengths.
let is_newer = match since_id.as_deref().and_then(|s| s.parse::<u64>().ok()) {
Some(prev) => tweet_id.parse::<u64>().map_or(false, |id| id > prev),
None => true,
};
if is_newer {
since_id = Some(tweet_id.to_string());
}
}
}
// Update newest_id from meta
if let Some(newest) = data
.get("meta")
.and_then(|m| m.get("newest_id"))
.and_then(|n| n.as_str())
{
since_id = Some(newest.to_string());
}
}
Ok(resp) => {
let status = resp.status();
if status.as_u16() == 429 {
// Rate limited — back off
tracing::warn!("Twitter: rate limited, backing off 60s");
tokio::time::sleep(std::time::Duration::from_secs(60)).await;
continue;
}
let err = resp.text().await.unwrap_or_default();
tracing::warn!("Twitter: mentions request failed ({status}): {err}");
}
Err(e) => {
tracing::warn!("Twitter: mentions request error: {e}");
}
}
tokio::time::sleep(poll_interval).await;
}
}
async fn health_check(&self) -> bool {
self.get_authenticated_user_id().await.is_ok()
}
}
/// Strip @mention from the beginning of a tweet text.
fn strip_at_mention(text: &str, _bot_user_id: &str) -> String {
// Remove all leading @mentions (Twitter includes @bot_name at start of replies)
let mut result = text;
while let Some(rest) = result.strip_prefix('@') {
// Skip past the username (until whitespace or end)
match rest.find(char::is_whitespace) {
Some(idx) => result = rest[idx..].trim_start(),
None => return String::new(),
}
}
result.to_string()
}
/// Split text into tweet-sized chunks, breaking at word boundaries.
fn split_tweet_text(text: &str, max_len: usize) -> Vec<String> {
if text.len() <= max_len {
return vec![text.to_string()];
}
let mut chunks = Vec::new();
let mut remaining = text;
while !remaining.is_empty() {
if remaining.len() <= max_len {
chunks.push(remaining.to_string());
break;
}
// Find the last space within the limit (fall back to a hard split),
// clamping the limit to a char boundary so multi-byte UTF-8 input
// cannot cause a slice panic.
let limit = (0..=max_len).rev().find(|&i| remaining.is_char_boundary(i)).unwrap_or(0);
let split_at = remaining[..limit].rfind(' ').unwrap_or(limit);
chunks.push(remaining[..split_at].to_string());
remaining = remaining[split_at..].trim_start();
}
chunks
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_name() {
let ch = TwitterChannel::new("token".into(), vec![]);
assert_eq!(ch.name(), "twitter");
}
#[test]
fn test_user_allowed_wildcard() {
let ch = TwitterChannel::new("token".into(), vec!["*".into()]);
assert!(ch.is_user_allowed("anyone"));
}
#[test]
fn test_user_allowed_specific() {
let ch = TwitterChannel::new("token".into(), vec!["user123".into()]);
assert!(ch.is_user_allowed("user123"));
assert!(!ch.is_user_allowed("other"));
}
#[test]
fn test_user_denied_empty() {
let ch = TwitterChannel::new("token".into(), vec![]);
assert!(!ch.is_user_allowed("anyone"));
}
#[tokio::test]
async fn test_dedup() {
let ch = TwitterChannel::new("token".into(), vec![]);
assert!(!ch.is_duplicate("tweet1").await);
assert!(ch.is_duplicate("tweet1").await);
assert!(!ch.is_duplicate("tweet2").await);
}
#[tokio::test]
async fn test_dedup_empty_id() {
let ch = TwitterChannel::new("token".into(), vec![]);
assert!(!ch.is_duplicate("").await);
assert!(!ch.is_duplicate("").await);
}
#[test]
fn test_strip_at_mention_single() {
assert_eq!(strip_at_mention("@bot hello world", "123"), "hello world");
}
#[test]
fn test_strip_at_mention_multiple() {
assert_eq!(strip_at_mention("@bot @other hello", "123"), "hello");
}
#[test]
fn test_strip_at_mention_only() {
assert_eq!(strip_at_mention("@bot", "123"), "");
}
#[test]
fn test_strip_at_mention_no_mention() {
assert_eq!(strip_at_mention("hello world", "123"), "hello world");
}
#[test]
fn test_split_tweet_text_short() {
let chunks = split_tweet_text("hello", 280);
assert_eq!(chunks, vec!["hello"]);
}
#[test]
fn test_split_tweet_text_long() {
let text = "a ".repeat(200);
let chunks = split_tweet_text(text.trim(), 280);
assert!(chunks.len() > 1);
for chunk in &chunks {
assert!(chunk.len() <= 280);
}
}
#[test]
fn test_split_tweet_text_no_spaces() {
let text = "a".repeat(300);
let chunks = split_tweet_text(&text, 280);
assert_eq!(chunks.len(), 2);
assert_eq!(chunks[0].len(), 280);
}
#[test]
fn test_config_serde() {
let toml_str = r#"
bearer_token = "AAAA"
allowed_users = ["user1"]
"#;
let config: crate::config::schema::TwitterConfig = toml::from_str(toml_str).unwrap();
assert_eq!(config.bearer_token, "AAAA");
assert_eq!(config.allowed_users, vec!["user1"]);
}
#[test]
fn test_config_serde_defaults() {
let toml_str = r#"
bearer_token = "tok"
"#;
let config: crate::config::schema::TwitterConfig = toml::from_str(toml_str).unwrap();
assert!(config.allowed_users.is_empty());
}
}
@@ -0,0 +1,409 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use anyhow::{bail, Result};
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
/// Generic Webhook channel — receives messages via HTTP POST and sends replies
/// to a configurable outbound URL. This is the "universal adapter" for any system
/// that supports webhooks.
pub struct WebhookChannel {
listen_port: u16,
listen_path: String,
send_url: Option<String>,
send_method: String,
auth_header: Option<String>,
secret: Option<String>,
}
/// Incoming webhook payload format.
#[derive(Debug, Deserialize)]
struct IncomingWebhook {
sender: String,
content: String,
#[serde(default)]
thread_id: Option<String>,
}
/// Outgoing webhook payload format.
#[derive(Debug, Serialize)]
struct OutgoingWebhook {
content: String,
#[serde(skip_serializing_if = "Option::is_none")]
thread_id: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
recipient: Option<String>,
}
impl WebhookChannel {
pub fn new(
listen_port: u16,
listen_path: Option<String>,
send_url: Option<String>,
send_method: Option<String>,
auth_header: Option<String>,
secret: Option<String>,
) -> Self {
let path = listen_path.unwrap_or_else(|| "/webhook".to_string());
// Ensure path starts with /
let listen_path = if path.starts_with('/') {
path
} else {
format!("/{path}")
};
Self {
listen_port,
listen_path,
send_url,
send_method: send_method
.unwrap_or_else(|| "POST".to_string())
.to_uppercase(),
auth_header,
secret,
}
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.webhook")
}
/// Verify an incoming request's signature if a secret is configured.
fn verify_signature(&self, body: &[u8], signature: Option<&str>) -> bool {
let Some(ref secret) = self.secret else {
return true; // No secret configured, accept all
};
let Some(sig) = signature else {
return false; // Secret is set but no signature header provided
};
// HMAC-SHA256 verification
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
let Ok(mut mac) = HmacSha256::new_from_slice(secret.as_bytes()) else {
return false;
};
mac.update(body);
// Signature should be hex-encoded
let Ok(expected) = hex::decode(sig.trim_start_matches("sha256=")) else {
return false;
};
mac.verify_slice(&expected).is_ok()
}
}
#[async_trait]
impl Channel for WebhookChannel {
fn name(&self) -> &str {
"webhook"
}
async fn send(&self, message: &SendMessage) -> Result<()> {
let Some(ref send_url) = self.send_url else {
tracing::debug!("Webhook channel: no send_url configured, skipping outbound message");
return Ok(());
};
let client = self.http_client();
let payload = OutgoingWebhook {
content: message.content.clone(),
thread_id: message.thread_ts.clone(),
recipient: if message.recipient.is_empty() {
None
} else {
Some(message.recipient.clone())
},
};
let mut request = match self.send_method.as_str() {
"PUT" => client.put(send_url),
_ => client.post(send_url),
};
if let Some(ref auth) = self.auth_header {
request = request.header("Authorization", auth);
}
let resp = request.json(&payload).send().await?;
let status = resp.status();
if !status.is_success() {
let body = resp
.text()
.await
.unwrap_or_else(|e| format!("<failed to read response: {e}>"));
bail!("Webhook send failed ({status}): {body}");
}
Ok(())
}
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> Result<()> {
use axum::{
body::Bytes,
extract::State,
http::{HeaderMap, StatusCode},
routing::post,
Router,
};
use portable_atomic::{AtomicU64, Ordering};
use std::sync::Arc;
let counter = Arc::new(AtomicU64::new(0));
struct WebhookState {
tx: tokio::sync::mpsc::Sender<ChannelMessage>,
secret: Option<String>,
counter: Arc<AtomicU64>,
}
let state = Arc::new(WebhookState {
tx: tx.clone(),
secret: self.secret.clone(),
counter: counter.clone(),
});
let listen_path = self.listen_path.clone();
async fn handle_webhook(
State(state): State<Arc<WebhookState>>,
headers: HeaderMap,
body: Bytes,
) -> StatusCode {
// Verify signature if secret is configured
if let Some(ref secret) = state.secret {
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
let signature = headers
.get("x-webhook-signature")
.and_then(|v| v.to_str().ok());
let valid = if let Some(sig) = signature {
if let Ok(mut mac) = HmacSha256::new_from_slice(secret.as_bytes()) {
mac.update(&body);
let expected =
hex::decode(sig.trim_start_matches("sha256=")).unwrap_or_default();
mac.verify_slice(&expected).is_ok()
} else {
false
}
} else {
false
};
if !valid {
tracing::warn!("Webhook: invalid signature, rejecting request");
return StatusCode::UNAUTHORIZED;
}
}
let payload: IncomingWebhook = match serde_json::from_slice(&body) {
Ok(p) => p,
Err(e) => {
tracing::warn!("Webhook: invalid JSON payload: {e}");
return StatusCode::BAD_REQUEST;
}
};
if payload.content.is_empty() {
return StatusCode::BAD_REQUEST;
}
let seq = state.counter.fetch_add(1, Ordering::Relaxed);
#[allow(clippy::cast_possible_truncation)]
let timestamp = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let reply_target = payload
.thread_id
.clone()
.unwrap_or_else(|| payload.sender.clone());
let msg = ChannelMessage {
id: format!("webhook_{seq}"),
sender: payload.sender,
reply_target,
content: payload.content,
channel: "webhook".to_string(),
timestamp,
thread_ts: payload.thread_id,
};
if state.tx.send(msg).await.is_err() {
return StatusCode::SERVICE_UNAVAILABLE;
}
StatusCode::OK
}
let app = Router::new()
.route(&listen_path, post(handle_webhook))
.with_state(state);
let addr = std::net::SocketAddr::from(([0, 0, 0, 0], self.listen_port));
tracing::info!(
"Webhook channel listening on http://0.0.0.0:{}{} ...",
self.listen_port,
self.listen_path
);
let listener = tokio::net::TcpListener::bind(addr).await?;
axum::serve(listener, app)
.await
.map_err(|e| anyhow::anyhow!("Webhook server error: {e}"))?;
Ok(())
}
async fn health_check(&self) -> bool {
// The webhook channel has no upstream dependency to probe; once
// listen() binds the port, the server is running, so report healthy.
true
}
}
#[cfg(test)]
mod tests {
use super::*;
fn make_channel() -> WebhookChannel {
WebhookChannel::new(
8080,
Some("/webhook".into()),
Some("https://example.com/callback".into()),
None,
None,
None,
)
}
fn make_channel_with_secret() -> WebhookChannel {
WebhookChannel::new(
8080,
None,
Some("https://example.com/callback".into()),
None,
None,
Some("mysecret".into()),
)
}
#[test]
fn default_path() {
let ch = WebhookChannel::new(8080, None, None, None, None, None);
assert_eq!(ch.listen_path, "/webhook");
}
#[test]
fn path_normalized() {
let ch = WebhookChannel::new(8080, Some("hooks/incoming".into()), None, None, None, None);
assert_eq!(ch.listen_path, "/hooks/incoming");
}
#[test]
fn send_method_default() {
let ch = make_channel();
assert_eq!(ch.send_method, "POST");
}
#[test]
fn send_method_put() {
let ch = WebhookChannel::new(
8080,
None,
Some("https://example.com".into()),
Some("put".into()),
None,
None,
);
assert_eq!(ch.send_method, "PUT");
}
#[test]
fn incoming_payload_deserializes_all_fields() {
let json = r#"{"sender": "zeroclaw_user", "content": "hello", "thread_id": "t1"}"#;
let payload: IncomingWebhook = serde_json::from_str(json).unwrap();
assert_eq!(payload.sender, "zeroclaw_user");
assert_eq!(payload.content, "hello");
assert_eq!(payload.thread_id.as_deref(), Some("t1"));
}
#[test]
fn incoming_payload_without_thread() {
let json = r#"{"sender": "bob", "content": "hi"}"#;
let payload: IncomingWebhook = serde_json::from_str(json).unwrap();
assert_eq!(payload.sender, "bob");
assert_eq!(payload.content, "hi");
assert!(payload.thread_id.is_none());
}
#[test]
fn outgoing_payload_serializes_content() {
let payload = OutgoingWebhook {
content: "response".into(),
thread_id: Some("t1".into()),
recipient: Some("zeroclaw_user".into()),
};
let json = serde_json::to_value(&payload).unwrap();
assert_eq!(json["content"], "response");
assert_eq!(json["thread_id"], "t1");
assert_eq!(json["recipient"], "zeroclaw_user");
}
#[test]
fn outgoing_payload_omits_none_fields() {
let payload = OutgoingWebhook {
content: "response".into(),
thread_id: None,
recipient: None,
};
let json = serde_json::to_value(&payload).unwrap();
assert_eq!(json["content"], "response");
assert!(json.get("thread_id").is_none());
assert!(json.get("recipient").is_none());
}
#[test]
fn verify_signature_no_secret() {
let ch = make_channel();
assert!(ch.verify_signature(b"body", None));
}
#[test]
fn verify_signature_missing_header() {
let ch = make_channel_with_secret();
assert!(!ch.verify_signature(b"body", None));
}
#[test]
fn verify_signature_valid() {
use hmac::{Hmac, Mac};
use sha2::Sha256;
type HmacSha256 = Hmac<Sha256>;
let ch = make_channel_with_secret();
let body = b"test body";
let mut mac = HmacSha256::new_from_slice(b"mysecret").unwrap();
mac.update(body);
let sig = hex::encode(mac.finalize().into_bytes());
assert!(ch.verify_signature(body, Some(&sig)));
}
#[test]
fn verify_signature_invalid() {
let ch = make_channel_with_secret();
assert!(!ch.verify_signature(b"body", Some("badhex")));
}
}
@@ -64,6 +64,17 @@ pub struct WhatsAppWebChannel {
client: Arc<Mutex<Option<Arc<wa_rs::Client>>>>,
/// Message sender channel
tx: Arc<Mutex<Option<tokio::sync::mpsc::Sender<ChannelMessage>>>>,
/// Voice transcription (STT) config
transcription: Option<crate::config::TranscriptionConfig>,
/// Text-to-speech config for voice replies
tts_config: Option<crate::config::TtsConfig>,
/// Chats awaiting a voice reply — maps chat JID to the latest substantive
/// reply text. A background task debounces and sends one voice note after
/// the agent finishes its turn (no newer substantive send() within the
/// debounce window).
pending_voice:
Arc<std::sync::Mutex<std::collections::HashMap<String, (String, std::time::Instant)>>>,
/// Chats whose last incoming message was a voice note.
voice_chats: Arc<std::sync::Mutex<std::collections::HashSet<String>>>,
}
impl WhatsAppWebChannel {
@@ -90,9 +101,31 @@ impl WhatsAppWebChannel {
bot_handle: Arc::new(Mutex::new(None)),
client: Arc::new(Mutex::new(None)),
tx: Arc::new(Mutex::new(None)),
transcription: None,
tts_config: None,
pending_voice: Arc::new(std::sync::Mutex::new(std::collections::HashMap::new())),
voice_chats: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
}
}
/// Configure voice transcription (STT) for incoming voice notes.
#[cfg(feature = "whatsapp-web")]
pub fn with_transcription(mut self, config: crate::config::TranscriptionConfig) -> Self {
if config.enabled {
self.transcription = Some(config);
}
self
}
/// Configure text-to-speech for outgoing voice replies.
#[cfg(feature = "whatsapp-web")]
pub fn with_tts(mut self, config: crate::config::TtsConfig) -> Self {
if config.enabled {
self.tts_config = Some(config);
}
self
}
/// Check if a phone number is allowed (E.164 format: +1234567890)
#[cfg(feature = "whatsapp-web")]
fn is_number_allowed(&self, phone: &str) -> bool {
@@ -275,6 +308,134 @@ impl WhatsAppWebChannel {
format!("{expanded_session_path}-shm"),
]
}
/// Attempt to download and transcribe a WhatsApp voice note.
///
/// Returns `None` if transcription is disabled, download fails, or
/// transcription fails (all logged as warnings).
#[cfg(feature = "whatsapp-web")]
async fn try_transcribe_voice_note(
client: &wa_rs::Client,
audio: &wa_rs_proto::whatsapp::message::AudioMessage,
transcription_config: Option<&crate::config::TranscriptionConfig>,
) -> Option<String> {
let config = transcription_config?;
// Enforce duration limit
if let Some(seconds) = audio.seconds {
if u64::from(seconds) > config.max_duration_secs {
tracing::info!(
"WhatsApp Web: skipping voice note ({}s exceeds {}s limit)",
seconds,
config.max_duration_secs
);
return None;
}
}
// Download the encrypted audio
use wa_rs::download::Downloadable;
let audio_data = match client.download(audio as &dyn Downloadable).await {
Ok(data) => data,
Err(e) => {
tracing::warn!("WhatsApp Web: failed to download voice note: {e}");
return None;
}
};
// Determine filename from mimetype for transcription API
let file_name = match audio.mimetype.as_deref() {
Some(m) if m.contains("opus") || m.contains("ogg") => "voice.ogg",
Some(m) if m.contains("mp4") || m.contains("m4a") => "voice.m4a",
Some(m) if m.contains("mpeg") || m.contains("mp3") => "voice.mp3",
Some(m) if m.contains("webm") => "voice.webm",
_ => "voice.ogg", // WhatsApp default
};
tracing::info!(
"WhatsApp Web: transcribing voice note ({} bytes, file={})",
audio_data.len(),
file_name
);
match super::transcription::transcribe_audio(audio_data, file_name, config).await {
Ok(text) if text.trim().is_empty() => {
tracing::info!("WhatsApp Web: voice transcription returned empty text, skipping");
None
}
Ok(text) => {
tracing::info!(
"WhatsApp Web: voice note transcribed ({} chars)",
text.len()
);
Some(text)
}
Err(e) => {
tracing::warn!("WhatsApp Web: voice transcription failed: {e}");
None
}
}
}
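The mimetype-to-filename mapping above can be exercised standalone. A sketch (the helper name is hypothetical; `voice.ogg` is the WhatsApp default container):

```rust
/// Pick a filename extension for the transcription API from a voice
/// note's mimetype, defaulting to WhatsApp's Opus-in-Ogg container.
fn file_name_for_mimetype(mimetype: Option<&str>) -> &'static str {
    match mimetype {
        Some(m) if m.contains("opus") || m.contains("ogg") => "voice.ogg",
        Some(m) if m.contains("mp4") || m.contains("m4a") => "voice.m4a",
        Some(m) if m.contains("mpeg") || m.contains("mp3") => "voice.mp3",
        Some(m) if m.contains("webm") => "voice.webm",
        _ => "voice.ogg",
    }
}
```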
/// Synthesize text to speech and send as a WhatsApp voice note (static version for spawned tasks).
#[cfg(feature = "whatsapp-web")]
async fn synthesize_voice_static(
client: &wa_rs::Client,
to: &wa_rs_binary::jid::Jid,
text: &str,
tts_config: &crate::config::TtsConfig,
) -> Result<()> {
let tts_manager = super::tts::TtsManager::new(tts_config)?;
let audio_bytes = tts_manager.synthesize(text).await?;
let audio_len = audio_bytes.len();
tracing::info!("WhatsApp Web TTS: synthesized {} bytes of audio", audio_len);
if audio_bytes.is_empty() {
anyhow::bail!("TTS returned empty audio");
}
use wa_rs_core::download::MediaType;
let upload = client
.upload(audio_bytes, MediaType::Audio)
.await
.map_err(|e| anyhow!("Failed to upload TTS audio: {e}"))?;
tracing::info!(
"WhatsApp Web TTS: uploaded audio (url_len={}, file_length={})",
upload.url.len(),
upload.file_length
);
// Estimate duration: Opus at ~32kbps → bytes / 4000 ≈ seconds
#[allow(clippy::cast_possible_truncation)]
let estimated_seconds = std::cmp::max(1, (upload.file_length / 4000) as u32);
let voice_msg = wa_rs_proto::whatsapp::Message {
audio_message: Some(Box::new(wa_rs_proto::whatsapp::message::AudioMessage {
url: Some(upload.url),
direct_path: Some(upload.direct_path),
media_key: Some(upload.media_key),
file_enc_sha256: Some(upload.file_enc_sha256),
file_sha256: Some(upload.file_sha256),
file_length: Some(upload.file_length),
mimetype: Some("audio/ogg; codecs=opus".to_string()),
ptt: Some(true),
seconds: Some(estimated_seconds),
..Default::default()
})),
..Default::default()
};
Box::pin(client.send_message(to.clone(), voice_msg))
.await
.map_err(|e| anyhow!("Failed to send voice note: {e}"))?;
tracing::info!(
"WhatsApp Web TTS: sent voice note ({} bytes, ~{}s)",
audio_len,
estimated_seconds
);
Ok(())
}
}
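The duration estimate used when sending the voice note (Opus at ~32 kbps, i.e. roughly 4000 bytes per second, clamped to at least one second) can be pulled out as a tiny helper. A sketch with an assumed function name:

```rust
/// Estimate voice-note duration from encoded size, assuming ~32 kbps Opus
/// (4000 bytes per second), clamped to a minimum of one second.
fn estimate_opus_seconds(file_length: u64) -> u32 {
    #[allow(clippy::cast_possible_truncation)]
    std::cmp::max(1, (file_length / 4000) as u32)
}
```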
#[cfg(feature = "whatsapp-web")]
@@ -303,6 +464,88 @@ impl Channel for WhatsAppWebChannel {
}
let to = self.recipient_to_jid(&message.recipient)?;
// Voice chat mode: send text normally AND queue a voice note of the
// final answer. Only substantive messages (not tool outputs) are queued.
// A debounce task waits 10s after the last substantive message, then
// sends ONE voice note. Text in → text out. Voice in → text + voice out.
let is_voice_chat = self
.voice_chats
.lock()
.map(|vs| vs.contains(&message.recipient))
.unwrap_or(false);
if is_voice_chat && self.tts_config.is_some() {
let content = &message.content;
// Only queue substantive natural-language replies for voice.
// Skip tool outputs: URLs, JSON, code blocks, errors, short status.
let is_substantive = content.len() > 40
&& !content.starts_with("http")
&& !content.starts_with('{')
&& !content.starts_with('[')
&& !content.starts_with("Error")
&& !content.contains("```")
&& !content.contains("tool_call")
&& !content.contains("wttr.in");
if is_substantive {
if let Ok(mut pv) = self.pending_voice.lock() {
pv.insert(
message.recipient.clone(),
(content.clone(), std::time::Instant::now()),
);
}
let pending = self.pending_voice.clone();
let voice_chats = self.voice_chats.clone();
let client_clone = client.clone();
let to_clone = to.clone();
let recipient = message.recipient.clone();
let tts_config = self.tts_config.clone().unwrap();
tokio::spawn(async move {
// Wait 10 seconds — long enough for the agent to finish its
// full tool chain and send the final answer.
tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
// Atomic check-and-remove: only one task gets the value
let to_voice = pending.lock().ok().and_then(|mut pv| {
if let Some((_, ts)) = pv.get(&recipient) {
if ts.elapsed().as_secs() >= 8 {
return pv.remove(&recipient).map(|(text, _)| text);
}
}
None
});
if let Some(text) = to_voice {
if let Ok(mut vc) = voice_chats.lock() {
vc.remove(&recipient);
}
match Box::pin(WhatsAppWebChannel::synthesize_voice_static(
&client_clone,
&to_clone,
&text,
&tts_config,
))
.await
{
Ok(()) => {
tracing::info!(
"WhatsApp Web: voice reply sent ({} chars)",
text.len()
);
}
Err(e) => {
tracing::warn!("WhatsApp Web: TTS voice reply failed: {e}");
}
}
}
});
}
// Fall through to send text normally (voice chat gets BOTH)
}
// Send text message
let outgoing = wa_rs_proto::whatsapp::Message {
conversation: Some(message.content.clone()),
..Default::default()
@@ -310,7 +553,7 @@ impl Channel for WhatsAppWebChannel {
let message_id = client.send_message(to, outgoing).await?;
tracing::debug!(
"WhatsApp Web: sent message to {} (id: {})",
"WhatsApp Web: sent text to {} (id: {})",
message.recipient,
message_id
);
@@ -380,40 +623,33 @@ impl Channel for WhatsAppWebChannel {
let logout_tx_clone = logout_tx.clone();
let retry_count_clone = retry_count.clone();
let session_revoked_clone = session_revoked.clone();
let transcription_config = self.transcription.clone();
let voice_chats = self.voice_chats.clone();
let mut builder = Bot::builder()
.with_backend(backend)
.with_transport_factory(transport_factory)
.with_http_client(http_client)
.on_event(move |event, _client| {
.on_event(move |event, client| {
let tx_inner = tx_clone.clone();
let allowed_numbers = allowed_numbers.clone();
let logout_tx = logout_tx_clone.clone();
let retry_count = retry_count_clone.clone();
let session_revoked = session_revoked_clone.clone();
let transcription_config = transcription_config.clone();
let voice_chats = voice_chats.clone();
async move {
match event {
Event::Message(msg, info) => {
let sender_jid = info.source.sender.clone();
let sender_alt = info.source.sender_alt.clone();
let sender = sender_jid.user().to_string();
let chat = info.source.chat.to_string();
let mapped_phone = if sender_jid.is_lid() {
client.get_phone_number_from_lid(&sender_jid.user).await
} else {
None
};
@@ -423,42 +659,92 @@ impl Channel for WhatsAppWebChannel {
mapped_phone.as_deref(),
);
let normalized = match sender_candidates
.iter()
.find(|candidate| {
Self::is_number_allowed_for_list(&allowed_numbers, candidate)
})
.cloned()
{
Some(n) => n,
None => {
tracing::warn!(
"WhatsApp Web: message from unrecognized sender not in allowed list (candidates_count={})",
sender_candidates.len()
);
return;
}
};
// Attempt voice note transcription (ptt = push-to-talk = voice note)
let voice_text = if let Some(ref audio) = msg.audio_message {
if audio.ptt == Some(true) {
Self::try_transcribe_voice_note(
&client,
audio,
transcription_config.as_ref(),
)
.await
} else {
tracing::debug!(
"WhatsApp Web: ignoring non-PTT audio message from {}",
normalized
);
None
}
} else {
None
};
// Use transcribed voice text, or fall back to text content.
// Track whether this chat used a voice note so we reply in kind.
// We store the chat JID (reply_target) since that's what send() receives.
let content = if let Some(ref vt) = voice_text {
if let Ok(mut vs) = voice_chats.lock() {
vs.insert(chat.clone());
}
format!("[Voice] {vt}")
} else {
if let Ok(mut vs) = voice_chats.lock() {
vs.remove(&chat);
}
let text = msg.text_content().unwrap_or("");
text.trim().to_string()
};
tracing::info!(
"WhatsApp Web message received (sender_len={}, chat_len={}, content_len={})",
sender.len(),
chat.len(),
content.len()
);
tracing::debug!(
"WhatsApp Web message content: {}",
content
);
if content.is_empty() {
tracing::debug!(
"WhatsApp Web: ignoring empty or non-text message from {}",
normalized
);
return;
}
if let Err(e) = tx_inner
.send(ChannelMessage {
id: uuid::Uuid::new_v4().to_string(),
channel: "whatsapp".to_string(),
sender: normalized.clone(),
// Reply to the originating chat JID (DM or group).
reply_target: chat,
content,
timestamp: chrono::Utc::now().timestamp() as u64,
thread_ts: None,
})
.await
{
tracing::error!("Failed to send message to channel: {}", e);
}
}
Event::Connected(_) => {
@@ -695,6 +981,14 @@ impl WhatsAppWebChannel {
) -> Self {
Self { _private: () }
}
pub fn with_transcription(self, _config: crate::config::TranscriptionConfig) -> Self {
self
}
pub fn with_tts(self, _config: crate::config::TtsConfig) -> Self {
self
}
}
#[cfg(not(feature = "whatsapp-web"))]
@@ -936,6 +1230,24 @@ mod tests {
assert!(WhatsAppWebChannel::should_purge_session(&flag));
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn with_transcription_sets_config_when_enabled() {
let mut tc = crate::config::TranscriptionConfig::default();
tc.enabled = true;
let ch = make_channel().with_transcription(tc);
assert!(ch.transcription.is_some());
}
#[test]
#[cfg(feature = "whatsapp-web")]
fn with_transcription_ignores_when_disabled() {
let tc = crate::config::TranscriptionConfig::default(); // enabled = false
let ch = make_channel().with_transcription(tc);
assert!(ch.transcription.is_none());
}
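The two tests above pin down the builder's gating: the config is retained only when `enabled` is set. That behavior can be sketched standalone with stand-in types (the real `TranscriptionConfig` lives in `crate::config`; this is an illustration, not the crate's implementation):

```rust
// Stand-ins for crate::config::TranscriptionConfig and the channel;
// only the enabled-gating behavior the tests assert is reproduced.
#[derive(Default)]
struct TranscriptionConfig {
    enabled: bool,
}

struct WhatsAppWebChannel {
    transcription: Option<TranscriptionConfig>,
}

impl WhatsAppWebChannel {
    fn with_transcription(mut self, config: TranscriptionConfig) -> Self {
        // Keep the config only when transcription is actually enabled.
        self.transcription = config.enabled.then_some(config);
        self
    }
}

fn main() {
    let ch = WhatsAppWebChannel { transcription: None }
        .with_transcription(TranscriptionConfig { enabled: true });
    assert!(ch.transcription.is_some());

    let ch = WhatsAppWebChannel { transcription: None }
        .with_transcription(TranscriptionConfig::default());
    assert!(ch.transcription.is_none());
}
```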
#[test]
#[cfg(feature = "whatsapp-web")]
fn session_file_paths_includes_wal_and_shm() {
@@ -0,0 +1,2 @@
pub mod self_test;
pub mod update;
@@ -0,0 +1,281 @@
//! `zeroclaw self-test` — quick and full diagnostic checks.
use anyhow::Result;
use std::path::Path;
/// Result of a single diagnostic check.
pub struct CheckResult {
pub name: &'static str,
pub passed: bool,
pub detail: String,
}
impl CheckResult {
fn pass(name: &'static str, detail: impl Into<String>) -> Self {
Self {
name,
passed: true,
detail: detail.into(),
}
}
fn fail(name: &'static str, detail: impl Into<String>) -> Self {
Self {
name,
passed: false,
detail: detail.into(),
}
}
}
/// Run the quick self-test suite (no network required).
pub async fn run_quick(config: &crate::config::Config) -> Result<Vec<CheckResult>> {
let mut results = Vec::new();
// 1. Config file exists and parses
results.push(check_config(config));
// 2. Workspace directory is writable
results.push(check_workspace(&config.workspace_dir).await);
// 3. SQLite memory backend opens
results.push(check_sqlite(&config.workspace_dir));
// 4. Provider registry has entries
results.push(check_provider_registry());
// 5. Tool registry has entries
results.push(check_tool_registry(config));
// 6. Channel registry loads
results.push(check_channel_config(config));
// 7. Security policy parses
results.push(check_security_policy(config));
// 8. Version sanity
results.push(check_version());
Ok(results)
}
/// Run the full self-test suite (includes network checks).
pub async fn run_full(config: &crate::config::Config) -> Result<Vec<CheckResult>> {
let mut results = run_quick(config).await?;
// 9. Gateway health endpoint
results.push(check_gateway_health(config).await);
// 10. Memory write/read round-trip
results.push(check_memory_roundtrip(config).await);
// 11. WebSocket handshake
results.push(check_websocket_handshake(config).await);
Ok(results)
}
/// Print results in a formatted table.
pub fn print_results(results: &[CheckResult]) {
let total = results.len();
let passed = results.iter().filter(|r| r.passed).count();
let failed = total - passed;
println!();
for (i, r) in results.iter().enumerate() {
let icon = if r.passed {
"\x1b[32m✓\x1b[0m"
} else {
"\x1b[31m✗\x1b[0m"
};
println!(" {} {}/{} {}: {}", icon, i + 1, total, r.name, r.detail);
}
println!();
if failed == 0 {
println!(" \x1b[32mAll {total} checks passed.\x1b[0m");
} else {
println!(" \x1b[31m{failed}/{total} checks failed.\x1b[0m");
}
println!();
}
fn check_config(config: &crate::config::Config) -> CheckResult {
if config.config_path.exists() {
CheckResult::pass(
"config",
format!("loaded from {}", config.config_path.display()),
)
} else {
CheckResult::fail("config", "config file not found (using defaults)")
}
}
async fn check_workspace(workspace_dir: &Path) -> CheckResult {
match tokio::fs::metadata(workspace_dir).await {
Ok(meta) if meta.is_dir() => {
// Try writing a temp file
let test_file = workspace_dir.join(".selftest_probe");
match tokio::fs::write(&test_file, b"ok").await {
Ok(()) => {
let _ = tokio::fs::remove_file(&test_file).await;
CheckResult::pass(
"workspace",
format!("{} (writable)", workspace_dir.display()),
)
}
Err(e) => CheckResult::fail(
"workspace",
format!("{} (not writable: {e})", workspace_dir.display()),
),
}
}
Ok(_) => CheckResult::fail(
"workspace",
format!("{} exists but is not a directory", workspace_dir.display()),
),
Err(e) => CheckResult::fail(
"workspace",
format!("{} (error: {e})", workspace_dir.display()),
),
}
}
fn check_sqlite(workspace_dir: &Path) -> CheckResult {
let db_path = workspace_dir.join("memory.db");
match rusqlite::Connection::open(&db_path) {
Ok(conn) => match conn.query_row("SELECT 1", [], |_row| Ok(())) {
Ok(()) => CheckResult::pass("sqlite", "memory.db opens and responds"),
Err(e) => CheckResult::fail("sqlite", format!("query failed: {e}")),
},
Err(e) => CheckResult::fail("sqlite", format!("cannot open memory.db: {e}")),
}
}
fn check_provider_registry() -> CheckResult {
let providers = crate::providers::list_providers();
if providers.is_empty() {
CheckResult::fail("providers", "no providers registered")
} else {
CheckResult::pass(
"providers",
format!("{} providers available", providers.len()),
)
}
}
fn check_tool_registry(config: &crate::config::Config) -> CheckResult {
let security = std::sync::Arc::new(crate::security::SecurityPolicy::from_config(
&config.autonomy,
&config.workspace_dir,
));
let tools = crate::tools::default_tools(security);
if tools.is_empty() {
CheckResult::fail("tools", "no tools registered")
} else {
CheckResult::pass("tools", format!("{} core tools available", tools.len()))
}
}
fn check_channel_config(config: &crate::config::Config) -> CheckResult {
let channels = config.channels_config.channels();
let configured = channels.iter().filter(|(_, c)| *c).count();
CheckResult::pass(
"channels",
format!(
"{} channel types, {} configured",
channels.len(),
configured
),
)
}
fn check_security_policy(config: &crate::config::Config) -> CheckResult {
let _policy =
crate::security::SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
CheckResult::pass(
"security",
format!("autonomy level: {:?}", config.autonomy.level),
)
}
fn check_version() -> CheckResult {
let version = env!("CARGO_PKG_VERSION");
CheckResult::pass("version", format!("v{version}"))
}
async fn check_gateway_health(config: &crate::config::Config) -> CheckResult {
let port = config.gateway.port;
let host = if config.gateway.host == "[::]" || config.gateway.host == "0.0.0.0" {
"127.0.0.1"
} else {
&config.gateway.host
};
let url = format!("http://{host}:{port}/health");
match reqwest::Client::new()
.get(&url)
.timeout(std::time::Duration::from_secs(5))
.send()
.await
{
Ok(resp) if resp.status().is_success() => {
CheckResult::pass("gateway", format!("health OK at {url}"))
}
Ok(resp) => CheckResult::fail("gateway", format!("health returned {}", resp.status())),
Err(e) => CheckResult::fail("gateway", format!("not reachable at {url}: {e}")),
}
}
async fn check_memory_roundtrip(config: &crate::config::Config) -> CheckResult {
let mem = match crate::memory::create_memory(
&config.memory,
&config.workspace_dir,
config.api_key.as_deref(),
) {
Ok(m) => m,
Err(e) => return CheckResult::fail("memory", format!("cannot create backend: {e}")),
};
let test_key = "__selftest_probe__";
let test_value = "selftest_ok";
if let Err(e) = mem
.store(
test_key,
test_value,
crate::memory::MemoryCategory::Core,
None,
)
.await
{
return CheckResult::fail("memory", format!("write failed: {e}"));
}
match mem.recall(test_key, 1, None).await {
Ok(entries) if !entries.is_empty() => {
let _ = mem.forget(test_key).await;
CheckResult::pass("memory", "write/read/delete round-trip OK")
}
Ok(_) => {
let _ = mem.forget(test_key).await;
CheckResult::fail("memory", "no entries returned after round-trip")
}
Err(e) => {
let _ = mem.forget(test_key).await;
CheckResult::fail("memory", format!("read failed: {e}"))
}
}
}
async fn check_websocket_handshake(config: &crate::config::Config) -> CheckResult {
let port = config.gateway.port;
let host = if config.gateway.host == "[::]" || config.gateway.host == "0.0.0.0" {
"127.0.0.1"
} else {
&config.gateway.host
};
let url = format!("ws://{host}:{port}/ws/chat");
match tokio_tungstenite::connect_async(&url).await {
Ok((_, _)) => CheckResult::pass("websocket", format!("handshake OK at {url}")),
Err(e) => CheckResult::fail("websocket", format!("handshake failed at {url}: {e}")),
}
}
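Every probe in the module above reduces to a `CheckResult`, and `print_results` only tallies and formats them. A standalone sketch of that pattern (stand-in struct, no crate dependencies; the pure `summary` helper is hypothetical, the real module formats inline):

```rust
// Minimal stand-in for self_test::CheckResult, illustrating the
// pass/fail tally that print_results performs. `summary` is a
// hypothetical pure helper; the real code prints directly.
struct CheckResult {
    name: &'static str,
    passed: bool,
    detail: String,
}

fn summary(results: &[CheckResult]) -> String {
    let total = results.len();
    let failed = results.iter().filter(|r| !r.passed).count();
    if failed == 0 {
        format!("All {total} checks passed.")
    } else {
        format!("{failed}/{total} checks failed.")
    }
}

fn main() {
    let results = vec![
        CheckResult { name: "config", passed: true, detail: "loaded".into() },
        CheckResult { name: "gateway", passed: false, detail: "not reachable".into() },
    ];
    for r in &results {
        let icon = if r.passed { "ok" } else { "FAIL" };
        println!("[{icon}] {}: {}", r.name, r.detail);
    }
    println!("{}", summary(&results));
}
```

Keeping checks as plain data like this is what lets `run_full` simply extend `run_quick`'s vector with the network probes before a single print pass.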
