Compare commits

113 Commits

Author SHA1 Message Date
Argenis c5f0155061 Merge pull request #4193 from zeroclaw-labs/fix/reaction-tool
fix(tools): pass platform channel_id to reaction trait
2026-03-21 21:38:32 -04:00
argenis de la rosa 9ee06ed6fc merge: resolve conflicts with master (image_gen + sessions) 2026-03-21 21:18:46 -04:00
Argenis ac6b43e9f4 fix: remove unused channel_names field from DiscordHistoryChannel (#4199)
* feat: add discord history logging and search tool with persistent channel cache

* fix: remove unused channel_names field from DiscordHistoryChannel

The channel_names HashMap was declared and initialized but never used.
Channel name caching is handled via discord_memory.get()/store() with
the cache:channel_name: prefix. Remove the dead field.

* style: run cargo fmt on discord_history.rs

---------

Co-authored-by: ninenox <nisit15@hotmail.com>
2026-03-21 21:15:23 -04:00
Argenis 6c5573ad96 Merge pull request #4194 from zeroclaw-labs/fix/session-messaging-tools
fix(security): add enforcement and validation to session tools
2026-03-21 21:15:17 -04:00
Argenis 1d57a0d1e5 fix(web/tools): improve a11y in collapsible section headings (#4197)
* fix(web/tools): make section headings collapsible

Agent Tools and CLI Tools section headings were static divs with no
way to collapse sections the user is not interested in, making the
page unwieldy with a large tool set.

- Convert both section heading divs to button elements toggling
  agentSectionOpen / cliSectionOpen state (both default open)
- Section content renders conditionally on those booleans
- ChevronsUpDown icon added (already in lucide-react bundle) that
  fades in on hover and indicates collapsed/expanded state
- No change to individual tool card parameter schema expand/collapse

Risk: Low — UI state only, no API or logic change.
Does not change: search/filter behaviour, tool card expand/collapse,
CLI tools table structure.

* fix(web/tools): improve a11y and fix invalid HTML in collapsible sections

- Replace <h2> inside <button> with <span role="heading" aria-level={2}>
  to fix invalid HTML (heading elements not permitted in interactive content)
- Add aria-expanded attribute to section toggle buttons for screen readers
- Add aria-controls + id linking buttons to their controlled sections
- Replace ChevronsUpDown with ChevronDown icon — ChevronsUpDown is
  visually symmetric so rotating 180deg has no visible effect; ChevronDown
  rotating to -90deg gives a clear directional cue
- Remove unused ChevronsUpDown import

---------

Co-authored-by: WareWolf-MoonWall <chris.hengge@gmail.com>
2026-03-21 21:02:10 -04:00
Argenis 9780c7d797 fix: restrict free command to Linux-only in security policy (#4198)
* fix: resolve claude-code test flakiness and update security policy

* fix: restrict `free` command to Linux-only in security policy

`free` is not available on macOS or other BSDs. Move it behind
a #[cfg(target_os = "linux")] gate so it is only included in the
default allowed commands on Linux systems.

---------

Co-authored-by: ninenox <nisit15@hotmail.com>
2026-03-21 21:02:05 -04:00
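The cfg-gating described in the commit above can be sketched as follows. This is a minimal illustration, assuming a hypothetical `default_allowed_commands` helper; the real ZeroClaw security policy code is shaped differently.

```rust
// Sketch of a platform-gated default command allowlist.
// `default_allowed_commands` is an illustrative name, not the real API.
#[allow(unused_mut)]
fn default_allowed_commands() -> Vec<&'static str> {
    let mut cmds = vec!["ls", "cat", "uptime"];
    // `free` reads /proc/meminfo and does not exist on macOS or the BSDs,
    // so it is only included in the default set on Linux builds.
    #[cfg(target_os = "linux")]
    cmds.push("free");
    cmds
}

fn main() {
    let cmds = default_allowed_commands();
    // On non-Linux targets "free" is simply absent from the defaults.
    println!("{} commands in default allowlist", cmds.len());
}
```

The `#[cfg]` attribute removes the statement at compile time, so non-Linux binaries never reference the command at all.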
Argenis 35a5451a17 fix(channels): address critical security bugs in Gmail Pub/Sub push (#4200)
* feat(channels): add Gmail Pub/Sub push notifications for real-time email

Add GmailPushChannel that replaces IMAP polling with Google's Pub/Sub
push notification system for real-time email-driven automation.

- New channel at src/channels/gmail_push.rs implementing the Channel trait
- Registers Gmail watch subscription (POST /gmail/v1/users/me/watch)
  with automatic renewal before the 7-day expiry
- Handles incoming Pub/Sub notifications at POST /webhook/gmail
- Fetches new messages via Gmail History API (startHistoryId-based)
- Dispatches email messages to the agent with full metadata
- Sends replies via Gmail messages.send API
- Config: gmail_push.enabled, topic, label_filter, oauth_token,
  allowed_senders, webhook_url
- OAuth token encrypted at rest via existing secret store
- Webhook endpoint added to gateway router
- 30+ unit tests covering notification parsing, header extraction,
  body decoding, sender allowlist, and config serialization

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(config): fix pre-existing test compilation errors in schema.rs

- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(channels): fix extract_body_text_plain test

The Gmail API sends base64url without padding. The decode_body function
converted URL-safe chars back to standard base64 but did not restore
the padding, causing the STANDARD decoder to fail and the code to fall
back to the message snippet. Add padding restoration before decoding.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(channels): address critical security bugs in Gmail Pub/Sub push

- Add webhook authentication via shared secret (webhook_secret config
  field or GMAIL_PUSH_WEBHOOK_SECRET env var), preventing unauthorized
  message injection through the unauthenticated webhook endpoint
- Add 1MB body size limit on webhook endpoint to prevent memory exhaustion
- Fix race condition in handle_notification: hold history_id lock across
  the read-fetch-update cycle to prevent duplicate message processing
  when concurrent webhook notifications arrive
- Sanitize RFC 2822 headers (To/Subject) to prevent CRLF injection
  attacks that could add arbitrary headers to outgoing emails
- Fix extract_email_from_header panic on malformed angle brackets by
  using rfind('>') and validating bracket ordering
- Add 30s default HTTP client timeout for all Gmail API calls,
  preventing indefinite hangs
- Clone tx sender before message processing loop to avoid holding
  the mutex lock across network calls

---------

Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 20:59:56 -04:00
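The base64url padding fix described in the commit above can be sketched like this. `normalize_base64` is a hypothetical name for illustration, not the actual `decode_body` implementation; only the normalization step is shown, with decoding left to a standard-alphabet decoder.

```rust
// Gmail sends base64url without '=' padding; convert the URL-safe
// alphabet back to the standard one AND restore padding before handing
// the string to a STANDARD-alphabet base64 decoder.
// `normalize_base64` is an illustrative helper name.
fn normalize_base64(input: &str) -> String {
    let mut s: String = input
        .chars()
        .map(|c| match c {
            '-' => '+',
            '_' => '/',
            other => other,
        })
        .collect();
    // Restore the '=' padding that the base64url form strips; without
    // this, strict standard decoders reject the input.
    while s.len() % 4 != 0 {
        s.push('=');
    }
    s
}

fn main() {
    // "SGVsbG8" is base64url for "Hello" with its padding stripped.
    assert_eq!(normalize_base64("SGVsbG8"), "SGVsbG8=");
    // "PGI-PC9iPg" is base64url for "<b></b>".
    assert_eq!(normalize_base64("PGI-PC9iPg"), "PGI+PC9iPg==");
    println!("normalized");
}
```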
Argenis 8e81d44d54 fix(gateway): address critical security and reliability bugs in Live Canvas (#4196)
* feat(gateway): add Live Canvas (A2UI) tool and real-time web viewer

Add a Live Canvas system that enables the agent to push rendered content
(HTML, SVG, Markdown, text) to a web-visible canvas in real time.

Backend:
- src/tools/canvas.rs: CanvasTool with render/snapshot/clear/eval actions,
  backed by a shared CanvasStore (Arc<RwLock<HashMap>>) with per-canvas
  broadcast channels for real-time updates
- src/gateway/canvas.rs: REST endpoints (GET/POST/DELETE /api/canvas/:id,
  GET /api/canvas/:id/history, GET /api/canvas) and WebSocket endpoint
  (WS /ws/canvas/:id) for real-time frame delivery

Frontend:
- web/src/pages/Canvas.tsx: Canvas viewer page with WebSocket connection,
  iframe sandbox rendering, canvas switcher, frame history panel

Registration:
- CanvasTool registered in all_tools_with_runtime (always available)
- Canvas routes wired into gateway router
- CanvasStore added to AppState
- Canvas page added to App.tsx router and Sidebar navigation
- i18n keys added for en/zh/tr locales

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(config): fix pre-existing test compilation errors in schema.rs

- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gateway): share CanvasStore between tool and REST API

The CanvasTool and gateway AppState each created their own CanvasStore,
so content rendered via the tool never appeared in the REST API.

Create the CanvasStore once in the gateway, pass it to
all_tools_with_runtime via a new optional parameter, and reuse the
same instance in AppState.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gateway): address critical security and reliability bugs in Live Canvas

- Validate content_type in REST POST endpoint against allowed set,
  preventing injection of "eval" frames via the REST API
- Enforce MAX_CONTENT_SIZE (256KB) limit on REST POST endpoint,
  matching tool-side validation to prevent memory exhaustion
- Add MAX_CANVAS_COUNT (100) limit to prevent unbounded canvas creation
  and memory exhaustion from CanvasStore
- Handle broadcast RecvError::Lagged in WebSocket handler gracefully
  instead of disconnecting the client
- Make MAX_CONTENT_SIZE and ALLOWED_CONTENT_TYPES pub for gateway reuse
- Update CanvasStore::render and subscribe to return Option for
  canvas count enforcement

---------

Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: rareba <rareba@users.noreply.github.com>
2026-03-21 20:59:18 -04:00
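The REST-side checks from the commit above can be sketched as below. The constants mirror the values stated in the message (256KB cap, allowed content types); the function name and exact allowed set are illustrative assumptions, not the actual gateway code.

```rust
// Illustrative REST-endpoint validation: reject unknown content types
// (notably "eval") and oversized bodies before storing a canvas frame.
const MAX_CONTENT_SIZE: usize = 256 * 1024;
const ALLOWED_CONTENT_TYPES: [&str; 4] = ["html", "svg", "markdown", "text"];

fn validate_frame(content_type: &str, content: &str) -> Result<(), String> {
    // "eval" is deliberately absent from the allowlist, so it cannot be
    // injected through the REST API.
    if !ALLOWED_CONTENT_TYPES.contains(&content_type) {
        return Err(format!("unsupported content_type: {content_type}"));
    }
    if content.len() > MAX_CONTENT_SIZE {
        return Err("content exceeds 256KB limit".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_frame("html", "<h1>hi</h1>").is_ok());
    assert!(validate_frame("eval", "alert(1)").is_err());
    println!("frame validation ok");
}
```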
Argenis 86ad0c6a2b fix(channels): address critical bugs in voice wake word detection (#4191)
* feat(channels): add voice wake word detection channel

Add VoiceWakeChannel behind the `voice-wake` feature flag that:
- Captures audio from the default microphone via cpal
- Uses energy-based VAD to detect speech activity
- Transcribes speech via the existing transcription API (Whisper)
- Checks for a configurable wake word in the transcription
- On detection, captures the following utterance and dispatches it
  as a ChannelMessage

State machine: Listening -> Triggered -> Capturing -> Processing -> Listening

Config keys (under [channels_config.voice_wake]):
- wake_word (default: "hey zeroclaw")
- silence_timeout_ms (default: 2000)
- energy_threshold (default: 0.01)
- max_capture_secs (default: 30)

Includes tests for config parsing, state machine, RMS energy
computation, and WAV encoding.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(config): fix pre-existing test compilation errors in schema.rs

- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(channels): exclude voice-wake from all-features CI check

Add a `ci-all` meta-feature in Cargo.toml that includes every feature
except `voice-wake`, which requires `libasound2-dev` (ALSA) not present
on CI runners. Update the check-all-features CI job to use
`--features ci-all` instead of `--all-features`.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(channels): address critical bugs in voice wake word detection

- Replace std::mem::forget(stream) with dedicated thread that holds the
  cpal stream and shuts down cleanly via oneshot channel, preventing
  microphone resource leaks on task cancellation
- Add config validation: energy_threshold must be positive+finite,
  silence_timeout_ms >= 100ms, max_capture_secs clamped to 300
- Guard WAV encoding against u32 overflow for large audio buffers
- Add hard cap on capture_buf size to prevent unbounded memory growth
- Increase audio channel buffer from 4 to 64 slots to reduce chunk
  drops during transcription API calls
- Remove dead WakeState::Processing variant that was never entered

---------

Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 20:43:19 -04:00
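The config validation rules listed in the commit above (positive and finite threshold, minimum silence timeout, capture-length clamp) can be sketched as follows. Struct and function names are illustrative, not the real channel code.

```rust
// Illustrative validation for voice-wake config values.
struct VoiceWakeConfig {
    energy_threshold: f32,
    silence_timeout_ms: u64,
    max_capture_secs: u64,
}

fn validate(mut cfg: VoiceWakeConfig) -> Result<VoiceWakeConfig, String> {
    // NaN/inf or non-positive thresholds would break the energy VAD.
    if !(cfg.energy_threshold.is_finite() && cfg.energy_threshold > 0.0) {
        return Err("energy_threshold must be positive and finite".into());
    }
    if cfg.silence_timeout_ms < 100 {
        return Err("silence_timeout_ms must be >= 100".into());
    }
    // Clamp rather than reject: an over-long capture window is safe to
    // shorten silently to the 300s ceiling.
    cfg.max_capture_secs = cfg.max_capture_secs.min(300);
    Ok(cfg)
}

fn main() {
    let cfg = validate(VoiceWakeConfig {
        energy_threshold: 0.01,
        silence_timeout_ms: 2000,
        max_capture_secs: 900,
    })
    .unwrap();
    println!("clamped capture to {}s", cfg.max_capture_secs);
}
```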
Argenis 6ecf89d6a9 fix(ci): skip release and publish workflows on forks (#4190)
When a fork syncs with upstream, GitHub attributes the push to the fork
owner, causing release-beta-on-push and publish-crates-auto to run
under the wrong identity — leading to confusing notifications and
guaranteed failures (missing secrets).

Add repository guards to root jobs so the entire pipeline is skipped
on forks.
2026-03-21 20:42:55 -04:00
argenis de la rosa 691efa4d8c style: fix cargo fmt formatting in reaction tool 2026-03-21 20:38:24 -04:00
argenis de la rosa d1e3f435b4 style: fix cargo fmt formatting in session tools 2026-03-21 20:38:08 -04:00
Argenis 44c3e264ad Merge pull request #4192 from zeroclaw-labs/fix/image-gen-tool
fix(tools): harden image_gen security and model validation
2026-03-21 20:37:27 -04:00
argenis de la rosa f2b6013329 fix(tools): harden image_gen security enforcement and model validation
- Replace manual can_act()/record_action() with enforce_tool_operation()
  to match the codebase convention used by all other tools (notion,
  memory_forget, claude_code, delegate, etc.), producing consistent
  error messages and avoiding logic duplication.

- Add model parameter validation to prevent URL path traversal attacks
  via crafted model identifiers (e.g. "../../evil-endpoint").

- Add tests for model traversal rejection and filename sanitization.
2026-03-21 20:08:51 -04:00
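The model-identifier check described in the commit above can be sketched as below: the model name is interpolated into the request path, so traversal sequences must be rejected before URL construction. `validate_model` and the exact charset are illustrative assumptions.

```rust
// Illustrative guard against path traversal via crafted model ids
// (e.g. "../../evil-endpoint") before the id is placed in a URL path.
fn validate_model(model: &str) -> Result<(), String> {
    if model.is_empty() {
        return Err("model must not be empty".into());
    }
    // Conservative charset; ".." and absolute paths are refused outright.
    let ok = model
        .chars()
        .all(|c| c.is_ascii_alphanumeric() || matches!(c, '-' | '_' | '/' | '.'))
        && !model.contains("..")
        && !model.starts_with('/');
    if ok {
        Ok(())
    } else {
        Err(format!("invalid model id: {model}"))
    }
}

fn main() {
    assert!(validate_model("fal-ai/flux/dev").is_ok());
    assert!(validate_model("../../evil-endpoint").is_err());
    println!("model validation ok");
}
```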
argenis de la rosa 05d3c51a30 fix(security): add security policy enforcement and input validation to session tools
SessionsSendTool was missing security gate enforcement entirely: any agent
could send messages to any session without security policy checks. Similarly,
SessionsHistoryTool had no security enforcement for reading session data.

Changes:
- Add SecurityPolicy field to SessionsHistoryTool (enforces ToolOperation::Read)
- Add SecurityPolicy field to SessionsSendTool (enforces ToolOperation::Act)
- Add session_id validation to reject empty or non-alphanumeric-only IDs
- Pass security policy from all_tools_with_runtime registration
- Add tests for empty session_id, non-alphanumeric session_id validation
2026-03-21 20:04:44 -04:00
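One reading of the session_id rule above (reject empty IDs and IDs with no alphanumeric content) can be sketched like this. The function name and exact rule interpretation are assumptions for illustration.

```rust
// Illustrative session_id validation: non-empty, and must contain at
// least one ASCII alphanumeric character (rejects IDs made entirely of
// punctuation or whitespace).
fn validate_session_id(id: &str) -> Result<(), String> {
    if id.trim().is_empty() {
        return Err("session_id must not be empty".into());
    }
    if !id.chars().any(|c| c.is_ascii_alphanumeric()) {
        return Err("session_id must contain alphanumeric characters".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_session_id("discord-12345").is_ok());
    assert!(validate_session_id("").is_err());
    println!("session_id validation ok");
}
```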
argenis de la rosa 2ceda31ce2 fix(tools): pass platform channel_id to reaction trait instead of channel name
The reaction tool was passing the channel adapter name (e.g. "discord",
"slack") as the first argument to Channel::add_reaction() and
Channel::remove_reaction(), but the trait signature expects a
platform-specific channel_id (e.g. Discord channel snowflake, Slack
channel ID like "C0123ABCD"). This would cause all reaction API calls
to fail at the platform level.

Fixes:
- Add required "channel_id" parameter to the tool schema
- Extract and pass channel_id (not channel_name) to trait methods
- Update tool description to mention the new parameter
- Add MockChannel channel_id capture for test verification
- Add test asserting channel_id (not name) reaches the trait
- Update all existing tests to supply channel_id
2026-03-21 20:01:22 -04:00
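The MockChannel capture pattern from the test plan above can be sketched as follows. The trait and struct are simplified synchronous stand-ins for the real async Channel trait, shown only to illustrate asserting that the platform channel_id (not the adapter name) reaches the trait method.

```rust
use std::cell::RefCell;

// Simplified stand-in for the Channel trait's reaction method.
trait Channel {
    fn add_reaction(&self, channel_id: &str, message_id: &str, emoji: &str);
}

// Mock that captures the channel_id it was called with, for assertions.
struct MockChannel {
    seen_channel_id: RefCell<Option<String>>,
}

impl Channel for MockChannel {
    fn add_reaction(&self, channel_id: &str, _message_id: &str, _emoji: &str) {
        *self.seen_channel_id.borrow_mut() = Some(channel_id.to_string());
    }
}

fn main() {
    let mock = MockChannel { seen_channel_id: RefCell::new(None) };
    // The tool must forward the platform id (e.g. "C0123ABCD"),
    // not the adapter name ("slack").
    mock.add_reaction("C0123ABCD", "1718", "👍");
    assert_eq!(mock.seen_channel_id.borrow().as_deref(), Some("C0123ABCD"));
    println!("mock captured platform channel_id");
}
```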
Argenis 9069bc3c1f fix(agent): add system prompt budgeting for small-context models (#4185)
For models with small context windows (e.g. glm-4.5-air ~8K tokens),
the system prompt alone can exceed the limit. This adds:

- max_system_prompt_chars config option (default 0 = unlimited)
- compact_context now also compacts the system prompt: skips the
  Channel Capabilities section and shows only tool names
- Truncation with marker when prompt exceeds the budget

Users can set `max_system_prompt_chars = 8000` in [agent] config
to cap the system prompt for small-context models.

Closes #4124
2026-03-21 19:40:21 -04:00
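The truncation behavior described above (0 = unlimited, otherwise cut at the budget and append a marker) can be sketched as below. `budget_system_prompt` and the marker wording are illustrative, not the actual implementation.

```rust
// Illustrative system-prompt budgeting: a max_chars of 0 means
// unlimited; otherwise truncate and append a visible marker.
fn budget_system_prompt(prompt: &str, max_chars: usize) -> String {
    if max_chars == 0 || prompt.chars().count() <= max_chars {
        return prompt.to_string();
    }
    let truncated: String = prompt.chars().take(max_chars).collect();
    format!("{truncated}\n[system prompt truncated to fit context window]")
}

fn main() {
    let full = budget_system_prompt("short prompt", 0);
    let cut = budget_system_prompt("a very long system prompt", 6);
    println!("{full}\n---\n{cut}");
}
```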
Argenis 9319fe18da fix(approval): support wildcard * in auto_approve and always_ask (#4184)
auto_approve = ["*"] was doing exact string matching, so only the
literal tool name "*" was matched. Users expecting wildcard semantics
had every tool blocked in supervised mode.

Also adds "prompt exceeds max length" to the context-window error
detection hints (fixes GLM/ZAI error 1261 detection).

Closes #4127
2026-03-21 19:38:11 -04:00
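The wildcard fix above amounts to checking for a `"*"` pattern before falling back to the exact-name comparison. A minimal sketch, with an illustrative function name:

```rust
// Illustrative wildcard-aware matcher for auto_approve / always_ask
// lists: "*" matches every tool; otherwise compare names exactly.
fn list_matches(list: &[&str], tool: &str) -> bool {
    list.iter().any(|pat| *pat == "*" || *pat == tool)
}

fn main() {
    // Previously only the literal tool named "*" matched ["*"].
    assert!(list_matches(&["*"], "shell"));
    assert!(!list_matches(&["shell"], "browser"));
    println!("wildcard matching ok");
}
```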
Argenis cc454a86c8 fix(install): remove pairing code display from installer (#4176)
The gateway pairing code is now shown in the dashboard, so displaying
it in the installer output is redundant and clutters the install log
(three separate codes were being shown).
2026-03-21 19:06:37 -04:00
Argenis 256e8ccebf chore: bump version to v0.5.6 (#4174)
Update version across all distribution manifests:
- Cargo.toml / Cargo.lock
- dist/aur/PKGBUILD + .SRCINFO
- dist/scoop/zeroclaw.json
2026-03-21 18:03:38 -04:00
Argenis 72c9e6b6ca fix(publish): publish aardvark-sys dep before main crate (#4172)
* fix(publish): add aardvark-sys version and publish it before main crate

- Add version = "0.1.0" to aardvark-sys path dependency in Cargo.toml
- Update all three publish workflows to publish aardvark-sys first
- Add aardvark-sys COPY to Dockerfile for workspace builds
- Fixes cargo publish failure: "dependency aardvark-sys does not
  specify a version"

* ci: publish aardvark-sys before main crate in all publish workflows

All three crates.io publish workflows now publish aardvark-sys first,
wait for indexing, then publish the main zeroclawlabs crate.
2026-03-21 16:20:50 -04:00
Argenis 755a129ca2 fix(install): use /dev/tty for sudo in curl|bash Xcode license accept (#4169)
When run via `curl | bash`, stdin is the curl pipe, so sudo cannot
prompt for a password. Redirect sudo's stdin from /dev/tty to reach
the real terminal, allowing the password prompt to work in piped
invocations.
2026-03-21 14:15:21 -04:00
Argenis 8b0d3684c5 fix(install): auto-accept Xcode license instead of bailing out (#4165)
Instead of exiting with a manual remediation step, the installer now
attempts to accept the Xcode/CLT license automatically via
`sudo xcodebuild -license accept`. Falls back to a clear error message
only if sudo fails (e.g. no terminal or password).
2026-03-21 13:57:38 -04:00
Giulio V cdb5ac1471 fix(tools): fix remove_reaction_success test
The output format used "{action}ed", which produced "removeed" for the
remove action. Use an explicit past-tense mapping instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 18:49:35 +01:00
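The explicit past-tense mapping mentioned above can be sketched like this; naive `"{action}ed"` formatting turns "remove" into "removeed". The function name is illustrative.

```rust
// Illustrative fix: map each action to an explicit past tense instead
// of format!("{action}ed"), which yields "removeed" for "remove".
fn past_tense(action: &str) -> &'static str {
    match action {
        "add" => "added",
        "remove" => "removed",
        _ => "performed",
    }
}

fn main() {
    // The naive formatting bug, for contrast:
    assert_eq!(format!("{}ed", "remove"), "removeed");
    assert_eq!(past_tense("remove"), "removed");
    println!("reaction {}", past_tense("add"));
}
```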
Giulio V 67acb1a0bb fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 18:10:05 +01:00
Giulio V 9eac6bafef fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 18:09:48 +01:00
Giulio V a12f2ff439 fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 18:09:36 +01:00
Argenis a38a4d132e fix(hardware): drain stdin in subprocess test to prevent broken pipe flake (#4161)
* fix(hardware): drain stdin in subprocess test to prevent broken pipe flake

The test script did not consume stdin, so SubprocessTool's stdin write
raced against the process exit, causing intermittent EPIPE failures.
Add `cat > /dev/null` to drain stdin before producing output.

* style: format subprocess test
2026-03-21 12:19:53 -04:00
Argenis 48aba73d3a fix(install): always check Xcode license on macOS, not just with --install-system-deps (#4153)
The Xcode license test-compile was inside install_system_deps(), which
only runs when --install-system-deps is passed. On macOS the default
path skipped this entirely, so users hit `cc` exit code 69 deep in
cargo build. Move the check into the unconditional main flow so it
always fires on Darwin.
2026-03-21 11:29:36 -04:00
Argenis a1ab1e1a11 fix(install): use test-compile instead of xcrun for Xcode license detection (#4151)
xcrun --show-sdk-path can succeed even when the Xcode/CLT license has
not been accepted, so the previous check was ineffective. Replace it
with an actual test-compilation of a trivial C file, which reliably
triggers the exit-code-69 failure when the license is pending.
2026-03-21 11:03:07 -04:00
Giulio V f394abf35c feat(tools): add standalone image generation tool via fal.ai
Add ImageGenTool that exposes fal.ai Flux model image generation as a
standalone tool, decoupled from the LinkedIn client. The tool accepts a
text prompt, optional filename/size/model parameters, calls the fal.ai
synchronous API, downloads the result, and saves to workspace/images/.

- New src/tools/image_gen.rs with full Tool trait implementation
- New ImageGenConfig in schema.rs (enabled, default_model, api_key_env)
- Config-gated registration in all_tools_with_runtime
- Security: checks can_act() and record_action() before execution
- Comprehensive unit tests (prompt validation, API key, size enum,
  autonomy blocking, tool spec)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 15:17:28 +01:00
Giulio V 52e0271bd5 feat(tools): add emoji reaction tool for cross-channel reactions
Add ReactionTool that exposes Channel::add_reaction and
Channel::remove_reaction as an agent-callable tool. Uses a
late-binding ChannelMapHandle (Arc<RwLock<HashMap>>) pattern
so the tool can be constructed during tool registry init and
populated once channels are available in start_channels.

Parameters: channel, message_id, emoji, action (add/remove).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 15:15:25 +01:00
Giulio V 6c0a48efff feat(tools): add session list, history, and send tools for inter-agent messaging
Add three new tools in src/tools/sessions.rs:
- sessions_list: lists active sessions with channel, message count, last activity
- sessions_history: reads last N messages from a session by ID
- sessions_send: appends a message to a session for inter-agent communication

All tools operate on the SessionBackend trait, using the JSONL SessionStore
by default. Registered unconditionally in all_tools_with_runtime when the
sessions directory is accessible.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 15:07:18 +01:00
SimianAstronaut7 87b5bca449 feat(config): add configurable pacing controls for slow/local LLM workloads (#3343)
* feat(config): add configurable pacing controls for slow/local LLM workloads (#2963)

Add a new `[pacing]` config section with four opt-in parameters that
let users tune timeout and loop-detection behavior for local LLMs
(Ollama, llama.cpp, vLLM) without disabling safety features entirely:

- `step_timeout_secs`: per-step LLM inference timeout independent of
  the overall message budget, catching hung model responses early.
- `loop_detection_min_elapsed_secs`: time-gated loop detection that
  only activates after a configurable grace period, avoiding false
  positives on long-running browser/research workflows.
- `loop_ignore_tools`: per-tool loop-detection exclusions so tools
  like `browser_screenshot` that structurally resemble loops are not
  counted toward identical-output detection.
- `message_timeout_scale_max`: overrides the hardcoded 4x ceiling in
  the channel message timeout scaling formula.

All parameters are strictly optional with no effect when absent,
preserving full backwards compatibility.

Closes #2963

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(config): add missing pacing fields in tests and call sites

* fix(config): add pacing arg to remaining cost-tracking test call sites

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-21 08:54:08 -04:00
Argenis be40c0c5a5 Merge pull request #4145 from zeroclaw-labs/feat/gateway-path-prefix
feat(gateway): add path_prefix for reverse-proxy deployments
2026-03-21 08:48:56 -04:00
argenis de la rosa 6527871928 fix: add path_prefix to test AppState in gateway/api.rs 2026-03-21 08:14:28 -04:00
argenis de la rosa 0bda80de9c feat(gateway): add path_prefix for reverse-proxy deployments
Adopted from #3709 by @slayer with minor cleanup.
Supersedes #3709
2026-03-21 08:14:28 -04:00
Argenis 02f57f4d98 Merge pull request #4144 from zeroclaw-labs/feat/claude-code-tool
feat(tools): add ClaudeCodeTool for two-tier agent delegation
2026-03-21 08:14:19 -04:00
Argenis ef83dd44d7 Merge pull request #4146 from zeroclaw-labs/feat/memory-recall-time-range
feat(memory): add time range filter to recall (since/until)
2026-03-21 08:14:12 -04:00
Argenis a986b6b912 fix(install): detect un-accepted Xcode license + bump to v0.5.5 (#4147)
* fix(install): detect un-accepted Xcode license before build

Add an xcrun check after verifying Xcode CLT is installed. When the
Xcode/CLT license has not been accepted, cc exits with code 69 and
the build fails with a cryptic linker error. This surfaces a clear
message telling the user to run `sudo xcodebuild -license accept`.

* chore(release): bump version to v0.5.5

Update version across all distribution manifests:
- Cargo.toml and Cargo.lock
- dist/aur/PKGBUILD and .SRCINFO
- dist/scoop/zeroclaw.json
2026-03-21 08:09:27 -04:00
SimianAstronaut7 b6b1186e3b feat(channel): add per-channel proxy_url support for HTTP/SOCKS5 proxies (#3345)
* feat(channel): add per-channel proxy_url support for HTTP/SOCKS5 proxies

Allow each channel to optionally specify a `proxy_url` in its config,
enabling users behind restrictive networks to route channel traffic
through HTTP or SOCKS5 proxies. When set, the per-channel proxy takes
precedence over the global `[proxy]` config; when absent, the channel
falls back to the existing runtime proxy behavior.

Adds `proxy_url: Option<String>` to all 12 channel config structs
(Telegram, Discord, Slack, Mattermost, Signal, WhatsApp, Wati,
NextcloudTalk, DingTalk, QQ, Lark, Feishu) and introduces
`build_channel_proxy_client`, `build_channel_proxy_client_with_timeouts`,
and `apply_channel_proxy_to_builder` helpers that normalize proxy URLs
and integrate with the existing client cache.

Closes #3262

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(channel): add missing proxy_url fields in test initializers

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-21 07:53:20 -04:00
SimianAstronaut7 00dc0c8670 feat(tool): enrich delegate sub-agent system prompt and add skills_directory config key (#3344)
* feat(tool): enrich delegate sub-agent system prompt and add skills_directory config key (#3046)

Sub-agents configured under [agents.<name>] previously received only the
bare system_prompt string. They now receive a structured system prompt
containing: tools section (allowed tools with parameters and invocation
protocol), skills section (from scoped or default directory), workspace
path, current date/time, safety constraints, and shell policy when shell
is in the effective tool list.

Add optional skills_directory field to DelegateAgentConfig for per-agent
scoped skill loading. When unset, falls back to default workspace
skills/ directory.

Closes #3046

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tools): add missing fields after rebase

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-21 07:53:02 -04:00
argenis de la rosa 43f2a0a815 fix: add ClaudeCodeConfig to config re-exports and fix formatting 2026-03-21 07:51:36 -04:00
argenis de la rosa 50b5bd4d73 ci: retrigger CI after stuck runners 2026-03-21 07:46:34 -04:00
argenis de la rosa 8c074870a1 fix(memory): replace redundant closures with function references
Clippy flagged `.map(|s| chrono::DateTime::parse_from_rfc3339(s))` as
redundant — use `.map(chrono::DateTime::parse_from_rfc3339)` directly.
2026-03-21 07:46:34 -04:00
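Clippy's redundant_closure lint from the commit above can be shown with a stdlib parser standing in for chrono's `parse_from_rfc3339` (assumed unavailable here); the shape of the fix is identical: pass the function path instead of a closure that only forwards its argument.

```rust
// Illustrative redundant_closure fix using str::parse in place of
// chrono::DateTime::parse_from_rfc3339.
fn parse_all(raw: Vec<&str>) -> Result<Vec<i32>, std::num::ParseIntError> {
    // Before (clippy warns): raw.into_iter().map(|s| str::parse::<i32>(s))
    // After: pass the function path directly.
    raw.into_iter().map(str::parse::<i32>).collect()
}

fn main() {
    let parsed = parse_all(vec!["1", "2", "3"]).unwrap();
    println!("parsed: {parsed:?}");
}
```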
argenis de la rosa 61d1841ce3 fix: update gateway mock Memory impls with since/until params
Both test mock implementations of Memory::recall() in gateway/mod.rs
were missing the new since/until parameters.
2026-03-21 07:46:34 -04:00
argenis de la rosa eb396cf38f feat(memory): add time range filter to recall (since/until)
Adopted from #3705 by @fangxueshun with fixes:
- Added input validation for date strings (RFC 3339)
- Used chrono DateTime comparison instead of string comparison
- Added since < until validation
- Updated mem0 backend
Supersedes #3705
2026-03-21 07:46:34 -04:00
argenis de la rosa 9f1657b9be fix(tools): use kill_on_drop for ClaudeCodeTool subprocess timeout 2026-03-21 07:46:24 -04:00
argenis de la rosa 8fecd4286c fix(tools): use kill_on_drop for ClaudeCodeTool subprocess timeout
Fixes E0382 borrow-after-move error: wait_with_output() consumed the
child handle, making child.kill() in the timeout branch invalid.
Use kill_on_drop(true) with cmd.output() instead.
2026-03-21 07:46:24 -04:00
argenis de la rosa df21d92da3 feat(tools): add ClaudeCodeTool for two-tier agent delegation
Adopted from #3748 by @ilyasubkhankulov with fixes:
- Removed unused _runtime field
- Fixed subprocess timeout handling
- Excluded unrelated Slack threading and Dockerfile changes

Closes #3748 (superseded)
2026-03-21 07:46:24 -04:00
Argenis 8d65924704 fix(channels): add cost tracking and enforcement to all channels (#4143)
Adds per-channel cost tracking via task-local context in the tool call
loop. Budget enforcement blocks further API calls when limits are
exceeded. Resolves merge conflicts with model-switch retry loop,
reply_target parameter, and autonomy level additions on master.

Supersedes #3758
2026-03-21 07:37:15 -04:00
Argenis 756c3cadff feat(transcription): add LocalWhisperProvider for self-hosted STT (TDD) (#4141)
Self-hosted Whisper-compatible STT provider that POSTs audio to a
configurable HTTP endpoint (e.g. faster-whisper over WireGuard). Audio
never leaves the platform perimeter.

Implemented via red/green TDD cycles:
  Wave 1 — config schema: LocalWhisperConfig struct, local_whisper field
    on TranscriptionConfig + Default impl, re-export in config/mod.rs
  Wave 2 — from_config validation: url non-empty, url parseable, bearer_token
    non-empty, max_audio_bytes > 0, timeout_secs > 0; returns Result<Self>
  Wave 3 — manager integration: registration with ? propagation (not if let Ok
    — credentials come directly from config, no env-var fallback; present
    section with bad values is a hard error, not a silent skip)
  Wave 4 — transcribe(): resolve_audio_format() extracted from validate_audio()
    so LocalWhisperProvider can resolve MIME without the 25 MB cloud cap;
    size check + format resolution before HTTP send
  Wave 5 — HTTP mock tests: success response, bearer auth header, 503 error

33 tests (20 baseline + 13 new), all passing. Clippy clean.

Co-authored-by: Nim G <theredspoon@users.noreply.github.com>
2026-03-21 07:15:36 -04:00
Argenis ee870028ff feat(channel): use Slack native markdown blocks for rich formatting (#4142)
Slack's Block Kit supports a native `markdown` block type that accepts
standard Markdown and handles rendering. This removes the need for a
custom Markdown-to-mrkdwn converter. Messages over 12,000 chars fall
back to plain text.

Co-authored-by: Joe Hoyle <joehoyle@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 07:12:27 -04:00
frido22 83183a39a5 feat(status): show service running state in zeroclaw status (#3751) 2026-03-21 06:49:47 -04:00
shiben 7a941fb753 feat(auth): add import functionality for existing OpenAI Codex auth p… (#3762)
* feat(auth): add import functionality for existing OpenAI Codex auth profiles

Introduces a new command-line option to import an existing `auth.json` file for OpenAI Codex, allowing users to bypass the login flow. The import feature reads and parses the specified JSON file, extracting authentication tokens and storing them in the user's profile. This change enhances user experience by simplifying the authentication process for existing users.

- Added `import` option to `AuthCommands` enum
- Implemented `import_openai_codex_auth_profile` function to handle the import logic
- Updated `handle_auth_command` to process the import option and validate provider compatibility
- Ensured that the import feature is exclusive to the `openai-codex` provider

* feat(auth): extract expiry from JWT in OpenAI Codex import

Enhances the `import_openai_codex_auth_profile` function by extracting the expiration date from the JWT access token. This change allows for more accurate management of token lifetimes by replacing the hardcoded expiration date with a dynamic value derived from the token itself.

- Added `extract_expiry_from_jwt` function to handle JWT expiration extraction
- Updated `TokenSet` to use the extracted expiration date instead of a static value
2026-03-21 06:49:44 -04:00
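The JWT expiry extraction above amounts to base64url-decoding the token's payload segment and reading its `exp` claim. A std-only sketch (the real `extract_expiry_from_jwt` presumably uses proper base64/serde crates; the naive claim scan here is just for illustration):

```rust
/// Minimal unpadded base64url decoder (JWT segments are unpadded).
fn b64url_decode(s: &str) -> Option<Vec<u8>> {
    const ALPHA: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    let (mut bits, mut nbits, mut out) = (0u32, 0u32, Vec::new());
    for &c in s.as_bytes() {
        let v = ALPHA.iter().position(|&a| a == c)? as u32;
        bits = ((bits << 6) | v) & 0x3FFF; // keep <=14 meaningful bits
        nbits += 6;
        if nbits >= 8 {
            nbits -= 8;
            out.push((bits >> nbits) as u8);
        }
    }
    Some(out)
}

/// Pull the numeric `exp` claim out of a JWT's payload segment.
fn extract_expiry_from_jwt(token: &str) -> Option<i64> {
    let payload = token.split('.').nth(1)?;
    let json = String::from_utf8(b64url_decode(payload)?).ok()?;
    // Naive claim scan instead of full JSON parsing (serde in real code).
    let idx = json.find("\"exp\"")?;
    let rest = json[idx + 5..].trim_start_matches(|c| c == ' ' || c == ':');
    let digits: String = rest.chars().take_while(|c| c.is_ascii_digit()).collect();
    digits.parse().ok()
}
```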
Argenis bcdbce0bee feat(web): add theme system with CSS variables and settings modal (#4133)
- Add ThemeContext with light/dark/system theme support
- Migrate all hardcoded colors to CSS variables
- Add SettingsModal for theme customization
- Add font loader for dynamic font selection
- Add i18n support for Chinese and Turkish locales
- Fix accessibility: add aria-live to pairing error message

Co-authored-by: nanyuantingfeng <nanyuantingfeng@163.com>
2026-03-21 06:22:30 -04:00
Argenis abb844d7f8 fix(config): add missing WhatsApp Web policy config keys (#4131)
* fix(config): add missing WhatsApp Web policy config keys (mode, dm_policy, group_policy, self_chat_mode)

* fix(onboard): add missing WhatsApp policy fields to wizard struct literals

The new mode, dm_policy, group_policy, and self_chat_mode fields added
to WhatsAppConfig need default values in the onboard wizard's struct
initializers to avoid E0063 compilation errors.
2026-03-21 06:04:21 -04:00
Argenis 48733d5ee2 feat(cron): add Edit button and modal for updating cron jobs (#4132)
- Backend: add PATCH /api/cron/{id} handler (handle_api_cron_patch)
  using update_shell_job_with_approval with approved=false; validates
  job exists (404 on miss), accepts name/schedule/command patch fields
- Router: register PATCH on /api/cron/{id} alongside existing DELETE
- Frontend API: add patchCronJob(id, patch) calling PATCH /api/cron/{id}
- i18n: add cron.edit, cron.edit_modal_title, cron.edit_error,
  cron.saving, cron.save keys to all 3 locales (zh, en, tr)
- UI: Edit (Pencil) button in Actions column opens a pre-populated modal
  with the job's current name, schedule expression, and command;
  submitting PATCHes the job and updates the table row in-place

Co-authored-by: WareWolf-MoonWall <chris.hengge@gmail.com>
2026-03-21 05:50:23 -04:00
Argenis 2d118af78f fix(channels): wire model_switch callback into channel inference path (#4130)
The channel path in `src/channels/mod.rs` was passing `None` as the
`model_switch_callback` to `run_tool_call_loop()`, which meant model
switching via the `model_switch` tool was silently ignored in channel
mode.

Wire the callback in following the same pattern as the CLI path:
- Pass `Some(model_switch_callback.clone())` instead of `None`
- Wrap the tool call loop in a retry loop
- Handle `ModelSwitchRequested` errors by re-creating the provider
  with the new model and retrying

Fixes #4107
2026-03-21 05:43:21 -04:00
Argenis 8d7e7e994e fix(memory): use plain OS threads for postgres operations to avoid nested runtime panic (#4129)
Replace `tokio::task::spawn_blocking()` with plain `std::thread::Builder`
OS threads in all PostgresMemory trait methods. The sync `postgres` crate
(v0.19.x) internally calls `Runtime::block_on()`, which panics when called
from Tokio's blocking pool threads in recent Tokio versions. Plain OS threads
have no runtime context, so the nested `block_on` succeeds.

This matches the pattern already used in `PostgresMemory::initialize_client()`,
which correctly used `std::thread::Builder` and never exhibited this bug.

A new `run_on_os_thread` helper centralizes the pattern: spawn an OS thread,
run the closure, and bridge the result back via a `tokio::sync::oneshot` channel.

Fixes #4101
2026-03-21 05:33:55 -04:00
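The `run_on_os_thread` pattern from that commit can be sketched with std only: a plain OS thread has no Tokio runtime context, so a nested `block_on` inside the closure succeeds. The real helper bridges the result back over a `tokio::sync::oneshot` and awaits it; this sync analogue blocks on an mpsc channel instead:

```rust
use std::sync::mpsc;
use std::thread;

/// Run `f` on a plain OS thread (no runtime context) and wait for its
/// result. Sketch only — the async version awaits a oneshot receiver
/// rather than blocking the calling thread.
fn run_on_os_thread<T, F>(name: &str, f: F) -> std::io::Result<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::Builder::new().name(name.to_string()).spawn(move || {
        // Ignore send errors: the receiver may already be dropped.
        let _ = tx.send(f());
    })?;
    Ok(rx.recv().expect("worker thread panicked before sending"))
}
```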
Joe Hoyle d38d706c8e feat(channel): add Slack Assistants API status indicators (#4105)
Implement start_typing/stop_typing for Slack using the Assistants API
assistant.threads.setStatus method. Tracks thread context from
assistant_thread_started events and inbound messages, then sets
"is thinking..." status during processing. Status auto-clears when
the bot sends a reply via chat.postMessage.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 05:32:31 -04:00
Chris Hengge 523188da08 feat(tools): add WeatherTool with wttr.in integration (#4104)
Implements the Weather integration as a native Rust Tool trait
implementation, consistent with the existing first-party tool
architecture (no WASM/plugin layer required).

- Add src/tools/weather_tool.rs with full WeatherTool impl
  - Fetches from wttr.in ?format=j1 (no API key, global coverage)
  - Supports city names (any language/script), IATA airport codes,
    GPS coordinates, postal/zip codes, domain-based geolocation
  - Metric (°C, km/h, mm) and imperial (°F, mph, in) units
  - Current conditions + 0-3 day forecast with hourly breakdown
  - Graceful error messages for unknown/invalid locations
  - Respects runtime proxy config via apply_runtime_proxy_to_builder
  - 36 unit tests: schema, URL building, param validation, formatting
- Register WeatherTool unconditionally in all_tools_with_runtime
  (no API key needed, no config gate — same pattern as CalculatorTool)
- Flip integrations registry Weather entry from ComingSoon to Available

Closes #<issue>
2026-03-21 05:32:28 -04:00
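The request URL the tool builds is straightforward: wttr.in takes the location directly in the path and `?format=j1` selects its JSON API (no key required). A hedged sketch — the actual WeatherTool presumably uses reqwest's URL types and fuller encoding:

```rust
/// Build a wttr.in JSON API URL for a location (city, IATA code,
/// coordinates, or postal code). Illustrative helper name; only spaces
/// are percent-encoded here, real code should encode fully.
fn build_wttr_url(location: &str) -> String {
    let loc = location.trim().replace(' ', "%20");
    format!("https://wttr.in/{loc}?format=j1")
}
```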
Baha Abu Nojaim 82f7fbbe0f feat(providers): add DeepMyst as OpenAI-compatible provider (#4103)
Register DeepMyst (https://deepmyst.com) as an OpenAI-compatible
provider with Bearer auth and DEEPMYST_API_KEY env var support.
Aliases: "deepmyst", "deep-myst".

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-21 05:32:26 -04:00
Caleb c1b2fceca5 fix(onboard): make tmux paste safe for text prompts (#4106) 2026-03-21 05:14:37 -04:00
Loc Nguyen Huu be6e9fca5d fix(docker): align workspace with Cargo.lock for --locked builds (#4126)
The builder used sed to remove crates/robot-kit from [workspace].members because that path was not copied into the image. Cargo.lock is still generated for the full workspace (including zeroclaw-robot-kit), so the manifest and lockfile disagreed. cargo build --release --locked then tried to rewrite Cargo.lock and failed with "cannot update the lock file ... because --locked was passed" (commonly hit when ZEROCLAW_CARGO_FEATURES includes memory-postgres).

Copy crates/robot-kit/ into the image and drop the sed step so the workspace matches the committed lockfile.

Made-with: Cursor

Co-authored-by: lokinh <locnh@uniultra.xyz>
2026-03-21 05:14:35 -04:00
Greg Lamberson 75c11dfb92 fix(config): prevent test suite from clobbering active_workspace.toml (#4121)
* fix(config): prevent test suite from clobbering active_workspace.toml

Refactor persist_active_workspace_config_dir() to accept the default
config directory as an explicit parameter instead of reading HOME
internally. This eliminates a hidden dependency on process-wide
environment state that caused test-suite runs to overwrite the real
user's active_workspace.toml with a stale temp-directory path.

The temp-directory guard is now unconditional (previously gated behind
cfg(not(test))). It rejects writes only when a temp config_dir targets
a non-temp default location, so test-to-test writes within temp dirs
still succeed.

Closes #4117

* fix: remove needless borrow on default_config_dir parameter

---------

Co-authored-by: lamco-office <office@lamco.io>
2026-03-21 05:14:32 -04:00
tf4fun 48270fbbf3 fix(cron): add qq to supported delivery channel whitelist (#4120)
The channel validation in `validate_announce_delivery` was missing `qq`,
causing API-created cron jobs with QQ delivery to be rejected.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 05:14:30 -04:00
Argenis 18a456b24e fix(mcp): wire MCP tools into WebSocket chat and gateway /api/tools (#4096)
* fix(mcp): wire MCP tools into WebSocket chat path and gateway /api/tools

Agent::from_config() did not initialize MCP tools because it was
synchronous and MCP connection requires async. The gateway tool
registry built for /api/tools also missed MCP tools for the same
reason.

Changes:
- Make Agent::from_config() async so it can call McpRegistry::connect_all()
- Add MCP tool initialization (both eager and deferred modes) to
  from_config(), following the same pattern used in loop_.rs CLI/webhook paths
- Add MCP tool initialization to the gateway's tool registry so
  /api/tools reflects MCP tools
- Update all three call sites (run(), handle_socket, test) to await

Closes #4042

* fix: merge master and fix formatting

* fix: remove underscore prefix from used bindings (clippy)
2026-03-21 05:13:01 -04:00
ehu shubham shaw 71e89801b5 feat(hardware): add RPi GPIO, Aardvark I2C/SPI/GPIO, and hardware plugin system (#4125)
* feat(hardware): add RPi GPIO, Aardvark I2C/SPI/GPIO, and hardware plugin system

Extends the hardware subsystem with three clusters of functionality,
all feature-gated (hardware / peripheral-rpi) with no impact on default builds.

Raspberry Pi native support:
- src/hardware/rpi.rs: board self-discovery (model, serial, revision),
  sysfs GPIO pin read/write, and ACT LED control
- scripts/99-act-led.rules: udev rule for non-root ACT LED access
- scripts/deploy-rpi.sh, scripts/rpi-config.toml, scripts/zeroclaw.service:
  one-shot deployment helper and systemd service template

Total Phase Aardvark USB adapter (I2C / SPI / GPIO):
- crates/aardvark-sys/: new workspace crate with FFI bindings loaded at
  runtime via libloading; graceful stub fallback when .so is absent or
  arch mismatches (Rosetta 2 detection)
- src/hardware/aardvark.rs: AardvarkTransport implementing Transport trait
- src/hardware/aardvark_tools.rs: agent tools i2c_scan, i2c_read,
  i2c_write, spi_transfer, gpio_aardvark
- src/hardware/datasheet.rs: datasheet search/download for detected devices
- docs/aardvark-integration.md, examples/hardware/aardvark/: guide + examples

Hardware plugin / ToolRegistry system:
- src/hardware/tool_registry.rs: ToolRegistry for hardware module tool sets
- src/hardware/loader.rs, src/hardware/manifest.rs: manifest-driven loader
- src/hardware/subprocess.rs: subprocess execution helper for board I/O
- src/gateway/hardware_context.rs: POST /api/hardware/reload endpoint
- src/hardware/mod.rs: exports all new modules; merge_hardware_tools and
  load_hardware_context_prompt helpers

Integration hooks (minimal surface):
- src/hardware/device.rs: DeviceKind::Aardvark, DeviceRuntime::Aardvark,
  has_aardvark / resolve_aardvark_device on DeviceRegistry
- src/hardware/transport.rs: TransportKind::Aardvark
- src/peripherals/mod.rs: gate create_board_info_tools behind hardware feature
- src/agent/loop_.rs: TOOL_CHOICE_OVERRIDE task-local for Anthropic provider
- src/providers/anthropic.rs: read TOOL_CHOICE_OVERRIDE; add tool_choice field
- Cargo.toml: add aardvark-sys to workspace and as dependency
- firmware/zeroclaw-nucleo/: update Cargo.toml and Cargo.lock

Non-goals:
- No changes to agent orchestration, channels, providers, or security policy
- No new config keys outside existing [hardware] / [peripherals] sections
- No CI workflow changes

Risk: Low. All new paths are feature-gated; aardvark.so loads at runtime
only when present. No schema migrations or persistent state introduced.

Rollback: revert this single commit.

* fix(hardware): resolve clippy and rustfmt CI failures

- struct_excessive_bools: allow on DeviceCapabilities (7 bool fields needed)
- unnecessary_debug_formatting: use .display() instead of {:?} for paths
- stable_sort_primitive: replace .sort() with .sort_unstable() on &str slices

* fix(hardware): add missing serial/uf2/pico modules declared in mod.rs

cargo fmt was exiting with code 1 because mod.rs declared pub mod serial,
uf2, pico_flash, pico_code but those files were missing from the branch.
Also apply auto-formatting to loader.rs.

* fix(hardware): apply rustfmt 1.92.0 formatting (matches CI toolchain)

* docs(scripts): add RPi deployment and interaction guide

* push

* feat(firmware): add initial Pico firmware and serial device handling

- Introduced main.py for ZeroClaw Pico firmware with a placeholder for MicroPython implementation.
- Added binary UF2 file for Pico deployment.
- Implemented serial device enumeration and validation in the hardware module, enhancing security by restricting allowed serial paths.
- Updated related modules to integrate new serial device functionality.

---------

Co-authored-by: ehushubhamshaw <eshaw1@wpi.edu>
2026-03-21 04:17:01 -04:00
Argenis 46f6e79557 fix(gateway): improve error message for Docker bridge connectivity (#4095)
When the gateway security guard blocks a public bind address, the error
message now mentions the Docker use case and provides clear instructions
for connecting from Docker containers.

Closes #4086
2026-03-21 00:16:01 -04:00
Argenis c301b1d4d0 fix(approval): auto-approve read-only tools in non-interactive mode (#4094)
* fix(approval): auto-approve read-only tools in non-interactive mode

Add web_search_tool, web_fetch, calculator, glob_search, content_search,
and image_info to the default auto_approve list. These are read-only tools
with no side effects that were being silently denied in channel mode
(Telegram, Slack, etc.) because the non-interactive ApprovalManager
auto-denies any tool not in auto_approve when autonomy != full.

Closes #4083

* fix: remove duplicate default_otp_challenge_max_attempts function
2026-03-20 23:57:43 -04:00
Argenis 981a93d942 fix(config): remove duplicate default_otp_challenge_max_attempts function (#4098)
The function was defined twice in schema.rs after merging #3921, causing
compilation failures on all downstream branches.
2026-03-20 23:57:27 -04:00
Argenis 34f0b38e42 fix(config): remove duplicate default_otp_challenge_max_attempts function (#4097)
PR #3921 accidentally introduced a duplicate definition of
default_otp_challenge_max_attempts() in config/schema.rs, causing
compilation to fail on master (E0428: name defined multiple times).
2026-03-20 18:49:15 -04:00
khhjoe 00209dd899 feat(memory): add mem0 (OpenMemory) backend integration (#3965)
* feat(memory): add mem0 (OpenMemory) backend integration

- Implement Mem0Memory struct with full Memory trait
- Add history() audit trail, recall_filtered() with time/metadata filters
- Add store_procedural() for conversation trace extraction
- Add ProceduralMessage type to Memory trait with default no-op
- Feature-gated behind `memory-mem0` flag
- 9 unit tests covering edge cases

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: apply cargo fmt

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(memory): add extraction_prompt config, deploy scripts, and timing instrumentation

- Add `extraction_prompt` field to `Mem0Config` for custom LLM fact
  extraction prompts (e.g. Cantonese/Chinese content), with
  `MEM0_EXTRACTION_PROMPT` env var fallback
- Pass `custom_instructions` in mem0 store requests so the server
  uses the client-supplied prompt over its default
- Add timing instrumentation to channel message pipeline
  (mem_recall_ms, elapsed_before_llm_ms, llm_call_ms, total_ms)
- Add `deploy/mem0/` with self-hosted mem0 + reranker GPU server
  scripts, fully configurable via environment variables
- Update config reference docs (EN, zh-CN, VI) with `[memory.mem0]`
  subsection

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

# Conflicts:
#	src/channels/mod.rs

* chore: remove accidentally staged worktree from index

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 18:22:44 -04:00
caoy 9e8a478254 fix(gemini): use default chat() for prompt-guided tool calling and add vision support (#3932)
* fix(gemini): use default chat() for prompt-guided tool calling and add multimodal vision support

The Gemini provider's chat() override bypassed the default trait
implementation that injects tool definitions into the system prompt for
providers without native tool calling. This caused the agent loop to see
tool definitions in context but never actually invoke them, resulting in
hallucinated tool calls (e.g. claiming "Stored" without calling
memory_store).

Remove the broken chat() override so the default prompt-guided fallback
in the Provider trait handles tool injection correctly. Add an explicit
capabilities() declaration (native_tool_calling: false, vision: true).

Also add multimodal support: convert Part from a plain struct to an
untagged enum with Text and Inline variants, and add build_parts() to
extract [IMAGE:data:...] markers as Gemini inline_data parts.

Includes 14 new tests covering capabilities, Part serialization,
build_parts edge cases, and role-mapping behavior. Removes unused
ChatResponse import.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test(gemini): move capabilities tests to component level and add tool conversion test

Move the 3 capabilities tests (native_tool_calling, vision,
supports_native_tools) from the inline module to
tests/component/gemini_capabilities.rs since they exercise the public
Provider trait contract through the factory. Add a new
convert_tools_returns_prompt_guided test verifying the agent loop will
receive PromptGuided payload for Gemini.

Private internals tests (Part serialization, build_parts, role mapping)
remain inline since those types are not publicly exported.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style(gemini): fix cargo fmt formatting

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(gemini): add prompt_caching field to capabilities declaration

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: myclaw <myclaw@myclaws-MacBook-Air.local>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 18:22:41 -04:00
Anton Markelov 96f25ac701 fix(prompt): respect autonomy level in SafetySection (Agent/gateway WS path) (#3952) (#4037)
The `SafetySection` in `SystemPromptBuilder` always hardcoded
"Do not run destructive commands without asking" and "Do not bypass
oversight or approval mechanisms" regardless of the configured
autonomy level. This caused the gateway WebSocket path (web interface)
to instruct the LLM to simulate approval dialogs even when
`autonomy.level = "full"`.

PRs #3955/#3970/#3975 fixed the channel dispatch path
(`build_system_prompt_with_mode_and_autonomy`) but missed the
`Agent::from_config` → `SystemPromptBuilder` path used by
`gateway/ws.rs`.

Changes:
- Add `autonomy_level` field to `PromptContext`
- Rewrite `SafetySection::build()` to conditionally include/exclude
  approval instructions based on autonomy level, matching the logic
  already present in `build_system_prompt_with_mode_and_autonomy`
- Add `autonomy_level` field to `Agent` struct and `AgentBuilder`
- Pass `config.autonomy.level` through `Agent::from_config`
- Add tests for full/supervised autonomy safety section behavior

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 18:22:35 -04:00
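The conditional SafetySection logic that commit describes can be sketched as a branch on the autonomy level; names and wording below are illustrative, not the actual `SystemPromptBuilder` output:

```rust
/// Emit approval-related safety instructions only when autonomy is not
/// "full", so full-autonomy sessions are not told to simulate approval
/// dialogs. Sketch of the behavior, not the real builder.
fn build_safety_section(autonomy_level: &str) -> String {
    let mut lines = vec!["Safety:".to_string()];
    if autonomy_level == "full" {
        lines.push("- You operate with full autonomy; no approval step exists.".to_string());
    } else {
        lines.push("- Do not run destructive commands without asking.".to_string());
        lines.push("- Do not bypass oversight or approval mechanisms.".to_string());
    }
    lines.join("\n")
}
```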
Jacobinwwey fb5c8cb620 feat(tools): route web_search providers with alias fallback (#4038) 2026-03-20 18:22:32 -04:00
ifengqi 9f5543e046 fix(qq): respond to WebSocket Ping frames to prevent connection timeout (#4041)
The QQ channel WebSocket loop did not handle incoming Ping frames,
causing the server to consider the connection dead and drop it. Add a
Ping handler that replies with Pong, keeping the connection alive.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 18:22:30 -04:00
luikore 05c9b8180b Make lark / feishu render markdown (#3866)
Co-authored-by: Luikore <masked>
2026-03-20 18:22:29 -04:00
Thorbjørn Lindeijer f96a0471b5 fix(providers): clamp unsupported temperatures in Claude Code provider (#3961)
The Claude Code CLI only supports temperatures 0.7 and 1.0, but
internal subsystems (memory consolidation, context summarizer) use
lower values like 0.1 and 0.2. Previously the provider rejected these
with a hard error, triggering retries and WARN-level log noise.

Clamp to the nearest supported value instead, since the CLI ignores
temperature anyway.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 18:22:26 -04:00
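The clamping behavior above — snap any requested temperature to the nearest CLI-supported value instead of erroring — can be sketched as (function name is illustrative):

```rust
/// Map a requested temperature to the nearest value the Claude Code CLI
/// accepts (0.7 or 1.0). Internal callers pass values like 0.1 or 0.2,
/// which previously triggered hard errors and retry noise.
fn clamp_temperature(requested: f64) -> f64 {
    const SUPPORTED: [f64; 2] = [0.7, 1.0];
    SUPPORTED
        .into_iter()
        .min_by(|a, b| {
            (a - requested)
                .abs()
                .partial_cmp(&(b - requested).abs())
                .expect("temperatures are finite")
        })
        .expect("SUPPORTED is non-empty")
}
```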
Artem Chernenko 072f5f1170 fix(transcription): honor configured default provider (#3883)
Co-authored-by: Artem Chernenko <12207348+turboazot@users.noreply.github.com>
2026-03-20 18:22:25 -04:00
Darren.Zeng 28f94ae48c style: enhance .editorconfig with comprehensive file type settings (#3872)
Expand the minimal .editorconfig to include comprehensive settings for
all file types used in the project:

- Add root = true declaration
- Add standard settings: end_of_line, charset, trim_trailing_whitespace,
  insert_final_newline
- Add Rust-specific settings (indent_size = 4, max_line_length = 100)
  to match rustfmt.toml
- Add Markdown settings (preserve trailing whitespace for hard breaks)
- Add TOML, YAML, Python, Shell script, and JSON settings

This ensures consistent editor behavior across all contributors and
matches the project's formatting standards.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-20 18:22:22 -04:00
RoomWithOutRoof 8a217a77f9 fix(config): add challenge_max_attempts field to OtpConfig (#3921)
* fix(docker): default CMD to daemon instead of gateway

- Change default Docker CMD from gateway to daemon in both dev and release stages
- gateway only starts the HTTP/WebSocket server — channel listeners (Matrix, Telegram, Discord, etc.) are never spawned
- daemon starts the full runtime: gateway + channels + heartbeat + scheduler
- Users who configure channels in config.toml and run the default image get no response because the channel sync loops never start

* fix(config): add challenge_max_attempts field to OtpConfig

Add missing challenge_max_attempts field to support OTP challenge
attempt limiting. This field allows users to configure the maximum
number of OTP challenge attempts before lockout.

Fixes #3919

---------

Co-authored-by: Jah-yee <jah.yee@outlook.com>
Co-authored-by: Jah-yee <jahyee@sparklab.ai>
2026-03-20 18:22:19 -04:00
Chris Hengge 79a7f08b04 fix(web): anchor memory table to viewport with dual scrollbars (#4027)
The /memory data grid grew unboundedly with table rows, pushing the
horizontal scrollbar to the very bottom of a tall page and making it
inaccessible without scrolling all the way down first.

- Layout: change outer shell from min-h-screen to h-screen +
  overflow-hidden, and add min-h-0 to <main> so flex-1 overflow-y-auto
  actually clamps at the viewport boundary instead of growing infinitely.
- Memory page: switch root div to flex-col h-full so it fills the
  bounded main area; give the glass-card table wrapper flex-1 min-h-0
  overflow-auto so it consumes remaining space and exposes both
  scrollbars without any page-level scrolling required.
- index.css: pin .table-electric thead th with position:sticky / top:0
  and a matching opaque background so column headers stay visible
  during vertical scroll inside the bounded card.

The result behaves like a bounded iframe: the table fills the available
screen, rows scroll vertically, wide columns scroll horizontally, and
both scrollbars are always reachable.
2026-03-20 18:22:11 -04:00
Argenis de12055364 feat(slack): implement reaction support for Slack channel (#4091)
* feat(slack): implement reaction support with sanitized error responses

Add add_reaction() and remove_reaction() for Slack channel, with
unicode-to-Slack emoji mapping, idempotent error handling, and
sanitized API error responses matching the pattern used by
chat.postMessage.

Based on #4089 by @joehoyle, with sanitize_api_error() applied to
reaction error paths for consistency with existing Slack methods.

Supersedes #4089

* chore(deps): bump rustls-webpki to 0.103.10 (RUSTSEC-2026-0049)
2026-03-20 18:18:45 -04:00
Argenis 65c34966bb Merge pull request #4092 from zeroclaw-labs/docs/aur-ssh-key
docs: remove architecture diagram from all READMEs
2026-03-20 18:02:32 -04:00
Will Sarg 4a2be7c2e5 feat(verifiable_intent): add native verifiable intent lifecycle module (#2938)
* feat(verifiable_intent): add native verifiable intent lifecycle module

Implements a Rust-native Verifiable Intent (VI) subsystem for ZeroClaw,
providing full credential lifecycle support for commerce agent authorization
using SD-JWT layered credentials.

New module: src/verifiable_intent/
- error.rs: ViError/ViErrorKind (25+ variants), implements std::error::Error
- types.rs: JWK, Cnf, Entity, Constraint (8 variants), Immediate/Autonomous
  mandate structs, Fulfillment, Layer1/Layer2/CredentialChain
- crypto.rs: base64url helpers, SD hash, JWS sign/verify, EC P-256 key
  generation/loading, disclosure creation, SD-JWT serialize/parse
- verification.rs: StrictnessMode, ChainVerificationResult,
  ConstraintCheckResult, verify_timestamps, verify_sd_hash_binding,
  verify_l3_cross_reference, verify_checkout_hash_binding, check_constraints
- issuance.rs: create_layer2_immediate, create_layer2_autonomous,
  create_layer3_payment, create_layer3_checkout

New tool: src/tools/verifiable_intent.rs
- VerifiableIntentTool implementing Tool trait (name: vi_verify)
- Operations: verify_binding, evaluate_constraints, verify_timestamps
- Gated behind verifiable_intent.enabled config flag

Wiring:
- src/lib.rs: pub mod verifiable_intent
- src/main.rs: mod verifiable_intent (binary re-declaration)
- src/config/schema.rs: VerifiableIntentConfig struct, field on Config
- src/config/mod.rs: re-exports VerifiableIntentConfig
- src/onboard/wizard.rs: default field in Config literals
- src/tools/mod.rs: conditional tool registration

Uses only existing deps: ring (ECDSA P-256), sha2, base64, serde_json,
chrono, anyhow. No new dependencies added.

Validation: cargo fmt clean, cargo clippy -D warnings clean,
cargo test --lib -- verifiable_intent passes (44 tests)

* chore(verifiable_intent): add Apache 2.0 attribution for VI spec reference

The src/verifiable_intent/ module is a Rust-native reimplementation based
on the Verifiable Intent open specification and reference implementation by
genereda (https://github.com/genereda/verifiable-intent), Apache 2.0.

- Add attribution section to src/verifiable_intent/mod.rs doc comment
- Add third-party attribution entry to NOTICE per Apache 2.0 section 4(d)

* fix(verifiable_intent): correct VI attribution URL and author

Replace hallucinated github.com/genereda/verifiable-intent with the
actual remote: github.com/agent-intent/verifiable-intent

* fix(verifiable_intent): remove unused pub re-exports to fix clippy

Remove unused re-exports of ViError, ViErrorKind, types::*,
ChainVerificationResult, and ConstraintCheckResult from the module
root. Only StrictnessMode is used externally.

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-20 17:52:55 -04:00
argenis de la rosa 2bd141aa07 docs: remove architecture diagram section from all READMEs
Remove the "How it works (short)" ASCII diagram section from
all 31 README files (English + 30 translations).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 17:45:35 -04:00
Argenis cc470601de docs(i18n): expand language hubs and add six new locales (#2934)
Expand README/docs navigation to include Korean, Tagalog, German, Arabic, Hindi, and Bengali locale entries. Add canonical locale hub and summary files for each new language under docs/i18n/.

Update i18n index/coverage metadata to reflect hub-level support and keep language discovery consistent across root docs entry points.
2026-03-20 17:40:09 -04:00
Argenis 707ee02d76 chore: bump version to 0.5.4 (#4090) 2026-03-20 16:06:52 -04:00
Argenis a47a9ee269 fix(skills): improve ClawHub skill installer with zip crate and URL parsing (#4088)
Replace the shell-based unzip extraction with the zip crate for
cross-platform support. Use reqwest::Url for proper URL parsing,
add www.clawhub.ai and clawhub: shorthand support, fix the download
API URL, add ZIP path traversal protection, size limits, rate-limit
handling, and SKILL.toml fallback generation.

Supersedes #4043
Closes #4022
2026-03-20 15:46:52 -04:00
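The ZIP path traversal protection mentioned above comes down to rejecting archive entry names that are absolute or contain `..`, so extraction can never escape the destination directory. A std-only sketch with a hypothetical helper name (the real installer applies the same idea to the zip crate's entry names):

```rust
use std::path::{Component, Path, PathBuf};

/// Join an archive entry name onto the destination directory, refusing
/// any component that could escape it. `.` is also rejected, which is
/// stricter than strictly necessary but safe.
fn safe_extract_path(dest: &Path, entry_name: &str) -> Option<PathBuf> {
    let mut out = dest.to_path_buf();
    for comp in Path::new(entry_name).components() {
        match comp {
            Component::Normal(part) => out.push(part),
            // `..`, `/`, drive prefixes, `.` — all rejected.
            _ => return None,
        }
    }
    Some(out)
}
```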
Argenis 8bb61fe368 fix(cron): persist delivery for api-created cron jobs (#4087)
Resolves merge conflicts from PR #4064. Uses typed DeliveryConfig in
CronAddBody and passes delivery directly to add_shell_job_with_approval
and add_agent_job instead of post-creation patching. Preserves master's
richer API fields (session_target, model, allowed_tools, delete_after_run).
2026-03-20 15:42:00 -04:00
avianion 38a8e910d0 feat(providers): add Avian as OpenAI-compatible provider (#4076)
* feat(providers): add Avian as a named provider

Add Avian (https://avian.io) as a first-class OpenAI-compatible provider
with bearer auth via AVIAN_API_KEY. Registers the provider in the factory,
credential resolver, provider list, onboard wizard, and docs.

Models: deepseek/deepseek-v3.2, moonshotai/kimi-k2.5, z-ai/glm-5,
minimax/minimax-m2.5.

* style: fix rustfmt formatting in wizard.rs curated models

Collapse short GLM-5 tuple onto single line to satisfy cargo fmt.

* fix: add none-backend test and sync localized docs

- Add memory_backend parameter to scaffold_workspace so it can
  conditionally skip MEMORY.md creation and adjust AGENTS.md guidance
  when memory.backend = "none"
- Add test scaffold_none_backend_disables_memory_guidance_and_skips_memory_md
- Sync Avian provider entry to all 6 localized providers-reference.md
  files (el, fr, ja, ru, vi, zh-CN)
- Bump providers-reference.md date to March 9, 2026

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* test(onboard): add explicit Avian assertions in wizard helper tests

Add targeted assertions for the Avian provider branches to prevent
silent regressions, as requested in CodeRabbit review feedback:

- default_model_for_provider("avian") => "deepseek/deepseek-v3.2"
- curated_models_for_provider("avian") includes all 4 catalog entries
- supports_live_model_fetch("avian") => true
- models_endpoint_for_provider("avian") => Avian API URL
- provider_env_var("avian") => "AVIAN_API_KEY"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: fix rustfmt formatting in wizard.rs scaffold_workspace

Break scaffold_workspace function signature and .await.unwrap() chains
across multiple lines to comply with rustfmt max line width.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-20 15:31:59 -04:00
Eddie's AI Agent 916ad490bd fix: remove BSD stat fallback from Dockerfiles (#3847) (#4077)
Containers always run Linux, so only GNU stat (-c%s) is needed.
The BSD fallback (stat -f%z) caused shell arithmetic errors under
podman when the fallback syntax was evaluated.

Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-20 15:31:56 -04:00
Eddie's AI Agent 2dee42d6e4 fix(daemon): add 8MB stack size for ARM64 Linux targets (#4078)
ARM64 Linux (musl and Android) targets were using the default 2MB stack,
which is insufficient for the 126 JsonSchema derives and causes silent
daemon crashes due to stack overflow. x86_64 and Windows already had 8MB
overrides.

Closes #3537

Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-20 15:28:40 -04:00
dependabot[bot] 6dbc1d7c9c chore(deps): bump the rust-all group with 2 updates (#4047)
Bumps the rust-all group with 2 updates: [opentelemetry-otlp](https://github.com/open-telemetry/opentelemetry-rust) and [extism](https://github.com/extism/extism).


Updates `opentelemetry-otlp` from 0.31.0 to 0.31.1
- [Release notes](https://github.com/open-telemetry/opentelemetry-rust/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-rust/blob/main/docs/release_0.30.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-rust/compare/v0.31.0...opentelemetry-otlp-0.31.1)

Updates `extism` from 1.13.0 to 1.20.0
- [Release notes](https://github.com/extism/extism/releases)
- [Commits](https://github.com/extism/extism/compare/v1.13.0...v1.20.0)

---
updated-dependencies:
- dependency-name: opentelemetry-otlp
  dependency-version: 0.31.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: extism
  dependency-version: 1.20.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: rust-all
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 15:16:08 -04:00
Anatolii Fesiuk 2bc2ddfbae feat(tool): add myself and list_projects actions to jira tool (#4061)
* Sync jira tool description between .rs and en.toml

Replace multi-line operational guide in en.toml with the same one-liner
already in jira_tool.rs description(), matching the pattern used by all
other tools where both sources are in sync.

* Add myself action to jira tool for credential verification

* Add tests for myself action in jira tool

* Review and fix list_projects action added to jira tool

- Fix doc comment: update action count from four to five and add missing
  myself entry
- Remove redundant statuses_url variable (was identical to url)

The list_projects action fetches all projects with their issue types,
statuses, and assignable users by combining /rest/api/3/project,
per-project /statuses, and /user/assignable/multiProjectSearch endpoints.

* Remove hardcoded project-specific statuses from shape_projects

Replace fixed known_order list (which included project-specific statuses
like 'Collecting Intel', 'Design', 'Verification') with a simple
alphabetical sort. Any Jira project can use arbitrary status names so
hardcoding an order is not applicable universally.

* Fix list_projects: bounded concurrency, error surfacing, and output shape

- Use tokio::task::JoinSet with STATUS_CONCURRENCY=5 to fetch per-project
  statuses concurrently instead of sequentially, bounding API blast radius
- Surface user fetch errors: non-2xx and JSON parse failures now bail
  instead of silently falling back to empty vec
- Surface per-project status JSON parse errors instead of swallowing them
  with unwrap_or_else
- Move users to top-level output {projects, users} so they are not
  duplicated across every project entry

* fix(tool): apply rustfmt formatting to jira_tool.rs
2026-03-20 15:11:53 -04:00
Moksh Gupta 9fadf50375 Feat/add pinggy tunnel (#4060)
* feat(tunnel): add Pinggy tunnel support with configuration options

* feat(pinggy): update Pinggy tunnel configuration to remove domain field and improve SSH command handling

* feat(pinggy): add encryption and decryption for Pinggy tunnel token in config

* feat(pinggy): enhance region configuration for Pinggy tunnel with detailed options and validation

* feat(pinggy): enhance region validation and streamline output handling in Pinggy tunnel

* fix(pinggy): resolve clippy and fmt warnings

---------

Co-authored-by: moksh gupta <moksh.gupta@linktoany.com>
2026-03-20 15:11:50 -04:00
tf4fun a047a0d9b8 feat(channel): enhance QQ channel with rich media and cron delivery (#4059)
Add full rich media send/receive support using unified [TYPE:target] markers
(aligned with Telegram). Register QQ as a cron announcement delivery channel.

- Media upload with SHA256-based caching and TTL
- Attachment download to workspace with all types supported
- Voice: prefer voice_wav_url (WAV), inject QQ ASR transcription
- File uploads include file_name for proper display in QQ client
- msg_seq generation and reply rate-limit tracking
- QQ delivery instructions in system prompt
- Register QQ in cron scheduler and tool description

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 15:11:47 -04:00
Argenis 43be8e5075 fix: preserve Slack session context when thread_replies=false (#4084)
* fix: preserve session context across Slack messages when thread_replies=false

When thread_replies=false, inbound_thread_ts() was falling back to each
message's own ts, giving every message a unique conversation key and
breaking multi-turn context. Now top-level messages get thread_ts=None
when threading is disabled, so all messages from the same user in the
same channel share one session.
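The key-selection logic described above can be sketched as follows (function shape and parameter names are illustrative, not the actual patch):

```rust
/// Hedged sketch: pick the conversation key for an inbound Slack message.
/// `thread_replies`, `msg_thread_ts`, and `msg_ts` mirror the concepts in
/// the description above; the real signature may differ.
fn inbound_thread_ts<'a>(
    thread_replies: bool,
    msg_thread_ts: Option<&'a str>,
    msg_ts: &'a str,
) -> Option<&'a str> {
    if let Some(ts) = msg_thread_ts {
        // A reply inside an existing thread always keys on that thread.
        return Some(ts);
    }
    if thread_replies {
        // Threading on: each top-level message starts its own thread/session.
        Some(msg_ts)
    } else {
        // Threading off: None, so all messages from one user in one
        // channel share a single session key.
        None
    }
}
```

The old behavior corresponds to the `Some(msg_ts)` branch being taken unconditionally for top-level messages.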

Closes #4052

* chore: ignore RUSTSEC-2024-0384 (unmaintained instant crate via nostr)
2026-03-20 15:00:31 -04:00
Argenis ea8fe95b19 fix: normalize 5-field cron weekday numbers to standard crontab semantics (#4082)
* fix: normalize 5-field cron weekday numbers to match standard crontab

The cron crate uses 1=Sun,2=Mon,...,7=Sat while standard crontab uses
0/7=Sun,1=Mon,...,6=Sat. This caused '1-5' to mean Sun-Thu instead of
Mon-Fri. Add a normalization step when converting 5-field expressions
to 6-field so user-facing semantics match standard crontab behavior.
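The renumbering step can be sketched as a hypothetical helper (not the actual patch code):

```rust
/// Hedged sketch: map a standard-crontab weekday number (0 or 7 = Sun,
/// 1 = Mon, ..., 6 = Sat) to the cron crate's numbering (1 = Sun, ..., 7 = Sat).
fn normalize_weekday(standard: u8) -> u8 {
    match standard {
        0 | 7 => 1,         // both crontab spellings of Sunday
        n @ 1..=6 => n + 1, // Mon..Sat shift up by one
        other => other,     // out of range: leave for the parser to reject
    }
}
```

Under this mapping a user-facing `1-5` becomes `2-6` in cron-crate terms, i.e. Mon-Fri as standard crontab intends.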

Closes #4049

* chore: ignore RUSTSEC-2024-0384 (unmaintained instant crate via nostr)
2026-03-20 15:00:28 -04:00
Argenis ce8f2133fb fix(provider): replace overall timeout with per-read timeout for Codex streams (#4081)
* fix(provider): replace overall timeout with per-read timeout for Codex streams

The 120-second overall HTTP timeout was killing SSE streams mid-flight
when GPT-5.4 reasoning responses exceeded that duration. Replace with
a 300-second per-read timeout that only fires when no data arrives,
allowing long-running streams to complete while still detecting stalled
connections.

Closes #3786

* chore(deps): bump aws-lc-rs to fix RUSTSEC-2026-0044/0048

Update aws-lc-rs 1.16.1 → 1.16.2 (aws-lc-sys 0.38.0 → 0.39.0) to
resolve security advisory for X.509 Name Constraints Bypass.
2026-03-20 14:40:27 -04:00
Argenis 19d9c6f32c Merge pull request #4075 from zeroclaw-labs/feat/testing-agent-loop
chore: bump version to 0.5.3
2026-03-20 13:34:04 -04:00
argenis de la rosa 0bff8d18fd chore: bump version to 0.5.3
Update version across all distribution manifests:
- Cargo.toml + Cargo.lock
- dist/scoop/zeroclaw.json
- dist/aur/PKGBUILD + .SRCINFO (catch up from 0.4.3)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 12:56:19 -04:00
Argenis 6f5b029033 fix(skills): support ClawHub registry URLs in skills install (#4069)
`zeroclaw skills install https://clawhub.ai/owner/skill` previously
failed because is_git_source() treated all https:// URLs as git repos.
ClawHub is a web registry, not a git host.

- Add is_clawhub_source() to detect clawhub.ai URLs
- Add clawhub_slug() to extract the skill name from the URL path
- Add install_clawhub_skill_source() to download via the ClawHub
  download API and extract the ZIP into the skills directory
- Exclude clawhub.ai URLs from git source detection
- Security audit runs on downloaded skills as with git installs

Closes #4022
2026-03-20 12:19:14 -04:00
Argenis 83ee103abb fix: add interrupt_on_new_message support for Matrix channel (#4070)
Add the missing interrupt_on_new_message field to MatrixConfig and wire
it through InterruptOnNewMessageConfig so Matrix behaves consistently
with Telegram, Slack, Discord, and Mattermost.

Closes #4058
2026-03-20 12:17:16 -04:00
Argenis d39ba69156 fix(providers): bail immediately on unrecoverable context window overflow (#4068)
When a request exceeds a model's context window and there is no
conversation history to truncate (e.g. system prompt alone is too
large), bail immediately with an actionable error message instead of
wasting all retry attempts on the same unrecoverable error.

Previously, the retry loop would attempt truncation, find nothing to
drop (only system + one user message), then fall through to the normal
retry logic which classified context window errors as retryable. This
caused 3 identical failing API calls for a single "hello" message.

The fix adds an early exit in all three chat methods (chat_with_history,
chat_with_tools, chat) when truncate_for_context returns 0 dropped
messages, matching the existing behavior in chat_with_system.

Fixes #4044
2026-03-20 12:10:24 -04:00
Argenis 22cdb5e2e2 fix(whatsapp): remove duplicate variable declaration causing unused warning (#4066)
* feat(config): add google workspace operation allowlists

* docs(superpowers): link google workspace operation inventory sources

* docs(superpowers): verify starter operation examples

* fix(google_workspace): remove duplicate credential/audit blocks, fix trim in allowlist check, add duplicate-methods test

- Remove the duplicated credentials_path, default_account, and audit_log
  blocks that were copy-pasted into execute() — they were idempotent but
  misleading and would double-append --account args on every call.
- Trim stored service/resource/method values in is_operation_allowed() to
  match the trim applied during Config::validate(), preventing a mismatch
  where a config entry with surrounding whitespace would pass validation but
  never match at runtime.
- Add google_workspace_allowed_operations_reject_duplicate_methods_within_entry
  test to cover the duplicate-method validation path that was implemented but
  untested.

* fix(google_workspace): close sub_resource bypass, trim allowed_services at runtime, mark spec implemented

- HIGH: extract and validate sub_resource before the allowlist check;
  is_operation_allowed() now accepts Option<&str> for sub_resource and
  returns false (fail-closed) when allowed_operations is non-empty and
  a sub_resource is present — prevents nested gws calls such as
  `drive/files/permissions/list` from slipping past a 3-segment policy
- MEDIUM: runtime allowed_services check now uses s.trim() == service,
  matching the trim() applied during config validation
- LOW: spec status updated to Implemented; stale "does not currently
  support method-level allowlists" line removed
- Added test: operation_allowlist_rejects_sub_resource_when_operations_configured

* docs(google_workspace): document sub_resource limitation and add config-reference entries

Spec updates (superpowers/specs):
- Semantics section: note that sub_resource calls are denied fail-closed when
  allowed_operations is configured
- Mental model: show both 3-segment and 4-segment gws command shapes; explain
  that 4-segment commands are unsupported with allowed_operations in this version
- Runtime enforcement: correct the validation order to match the implementation
  (sub_resource extracted before allowlist check, budget charged last)
- New section: Sub-Resource Limitation — documents impact, operator workaround,
  and confirms the deny is intentional for this slice
- Follow-On Work: add sub_resource config model extension as item 1

Config reference updates (all three locales):
- Add [google_workspace] section with top-level keys, [[allowed_operations]]
  sub-table, sub-resource limitation note, and TOML example

* fix(docs): add classroom and events to allowed_services list in all config-reference locales

* feat(google_workspace): extend allowed_operations to support sub_resource for 4-segment gws commands

All Gmail operations use gws gmail users <sub_resource> <method>, not the flat
3-segment shape. Without sub_resource support in allowed_operations, Gmail could
not be scoped at all, making the email-assistant use case impossible.

Config model:
- Add optional sub_resource field to GoogleWorkspaceAllowedOperation
- An entry without sub_resource matches 3-segment calls (Drive, Calendar, etc.)
- An entry with sub_resource matches only calls with that exact sub_resource value
- Duplicate detection updated to (service, resource, sub_resource) key

Runtime:
- Remove blanket sub_resource deny; is_operation_allowed now matches on all four
  dimensions including the optional sub_resource

Tests:
- Add operation_allowlist_matches_gmail_sub_resource_shape
- Add operation_allowlist_matches_drive_3_segment_shape
- Add rejects_operation_with_unlisted_sub_resource
- Add google_workspace_allowed_operations_allow_same_resource_different_sub_resource
- Add google_workspace_allowed_operations_reject_invalid_sub_resource_characters
- Add google_workspace_allowed_operations_deserialize_without_sub_resource
- Update all existing tests to use correct gws command shapes

Docs:
- Spec: correct Gmail examples throughout; remove Sub-Resource Limitation section;
  update data model, validation rules, example use case, and follow-on work
- Config-reference (en, vi, zh-CN): add sub_resource field to allowed_operations
  table; update Gmail examples to correct 4-segment shapes

Platform:
- email-assistant SKILL.md: update allowed_operations paths to gmail/users/* shape

* fix(google_workspace): add classroom and events to service parameter schema description

* fix(google_workspace): cross-validate allowed_operations service against allowed_services

When allowed_services is explicitly configured, each allowed_operations entry's
service must appear in that list. An entry that can never match at runtime is a
misconfigured policy: it looks valid but silently produces a narrower scope than
the operator intended. Validation now rejects it with a clear error message.

Scope: only applies when allowed_services is non-empty. When it is empty, the tool
uses a built-in default list defined in the tool layer; the validator cannot
enumerate that list without duplicating the constant, so the cross-check is skipped.

Also:
- Update allowed_operations field doc-comment from 3-part (service, resource, method)
  to 4-part (service, resource, sub_resource, method) model
- Soften Gmail sub_resource "required" language in config-reference (en, vi, zh-CN)
  from a validation requirement to a runtime matching requirement — the validator
  does not and should not hardcode API shape knowledge for individual services
- Add tests: rejects operation service not in allowed_services; skips cross-check
  when allowed_services is empty

* fix(google_workspace): cross-validate allowed_operations.service against effective service set

When allowed_services is empty the validator was silently skipping the
service cross-check, allowing impossible configs like an unlisted service
in allowed_operations to pass validation and only fail at runtime.

Move DEFAULT_GWS_SERVICES from the tool layer (google_workspace.rs) into
schema.rs so the validator can use it unconditionally. When allowed_services
is explicitly set, validate against that set; when empty, fall back to
DEFAULT_GWS_SERVICES. Remove the now-incorrect "skips cross-check when empty"
test and add two replacement tests: one confirming a valid default service
passes, one confirming an unknown service is rejected even with empty
allowed_services.

* fix(google_workspace): update test assertion for new error message wording

* docs(google_workspace): fix stale 3-segment gmail example in TDD plan

* fix(google_workspace): address adversarial review round 4 findings

- Error message for denied operations now includes sub_resource when
  present, so gmail/users/messages/send and gmail/users/drafts/send
  produce distinct, debuggable errors.
- Audit log now records sub_resource, completing the trail for 4-segment
  Gmail operations.
- Normalize (trim) allowed_services and allowed_operations fields at
  construction time in new(). Runtime comparisons now use plain equality
  instead of .trim() on every call, removing the latent defect where a
  future code path could forget to trim and silently fail to match.
- Unify runtime character validation with schema validation: sub_resource
  and service/resource/method checks now both require lowercase alphanumeric
  plus underscore and hyphen, matching the validator's character set.
- Add positional_cmd_args() test helper and tests verifying 3-segment
  (Drive) and 4-segment (Gmail) argument ordering.
- Add test confirming page_limit without page_all passes validation.
- Add test confirming whitespace in config values is normalized at
  construction, not deferred to comparison time.
- Fix spec Runtime Enforcement section to reflect actual code order.

* fix(google_workspace): wire production helpers to close test coverage gaps

- Remove #[cfg(test)] from positional_cmd_args; execute() now calls the
  same function the arg-ordering tests exercise, so a drift in the real
  command-building path is caught by the existing tests.
- Extract build_pagination_args(page_all, page_limit) as a production
  method used by execute(). Replace the brittle page_limit_without_page_all
  test (which relied on environment-specific execution failure wording)
  with four direct assertions on build_pagination_args covering all
  page_all/page_limit combinations.

* fix(whatsapp): remove duplicate variable declaration causing unused warning

Remove duplicate `let transcription_config = self.transcription.clone()`
(line 626 shadowed by identical line 628). The duplicate caused a
compiler warning during --features whatsapp-web builds.

Note: the reported "hang" at 526/528 crates is expected behavior for
release builds with lto="fat" + codegen-units=1 — the final link step
is slow but does complete.

Closes #4034

---------

Co-authored-by: Nim G <theredspoon@users.noreply.github.com>
2026-03-20 12:07:55 -04:00
Argenis 206d19af11 fix(api): respect job_type and delivery in POST /api/cron (#4063) (#4065) 2026-03-20 12:03:46 -04:00
Nim G bbd2556861 feat(tool): google_workspace operation-level allowlist (#4010)
* feat(config): add google workspace operation allowlists

* docs(superpowers): link google workspace operation inventory sources

* docs(superpowers): verify starter operation examples

* fix(google_workspace): remove duplicate credential/audit blocks, fix trim in allowlist check, add duplicate-methods test

- Remove the duplicated credentials_path, default_account, and audit_log
  blocks that were copy-pasted into execute() — they were idempotent but
  misleading and would double-append --account args on every call.
- Trim stored service/resource/method values in is_operation_allowed() to
  match the trim applied during Config::validate(), preventing a mismatch
  where a config entry with surrounding whitespace would pass validation but
  never match at runtime.
- Add google_workspace_allowed_operations_reject_duplicate_methods_within_entry
  test to cover the duplicate-method validation path that was implemented but
  untested.

* fix(google_workspace): close sub_resource bypass, trim allowed_services at runtime, mark spec implemented

- HIGH: extract and validate sub_resource before the allowlist check;
  is_operation_allowed() now accepts Option<&str> for sub_resource and
  returns false (fail-closed) when allowed_operations is non-empty and
  a sub_resource is present — prevents nested gws calls such as
  `drive/files/permissions/list` from slipping past a 3-segment policy
- MEDIUM: runtime allowed_services check now uses s.trim() == service,
  matching the trim() applied during config validation
- LOW: spec status updated to Implemented; stale "does not currently
  support method-level allowlists" line removed
- Added test: operation_allowlist_rejects_sub_resource_when_operations_configured

* docs(google_workspace): document sub_resource limitation and add config-reference entries

Spec updates (superpowers/specs):
- Semantics section: note that sub_resource calls are denied fail-closed when
  allowed_operations is configured
- Mental model: show both 3-segment and 4-segment gws command shapes; explain
  that 4-segment commands are unsupported with allowed_operations in this version
- Runtime enforcement: correct the validation order to match the implementation
  (sub_resource extracted before allowlist check, budget charged last)
- New section: Sub-Resource Limitation — documents impact, operator workaround,
  and confirms the deny is intentional for this slice
- Follow-On Work: add sub_resource config model extension as item 1

Config reference updates (all three locales):
- Add [google_workspace] section with top-level keys, [[allowed_operations]]
  sub-table, sub-resource limitation note, and TOML example

* fix(docs): add classroom and events to allowed_services list in all config-reference locales

* feat(google_workspace): extend allowed_operations to support sub_resource for 4-segment gws commands

All Gmail operations use gws gmail users <sub_resource> <method>, not the flat
3-segment shape. Without sub_resource support in allowed_operations, Gmail could
not be scoped at all, making the email-assistant use case impossible.

Config model:
- Add optional sub_resource field to GoogleWorkspaceAllowedOperation
- An entry without sub_resource matches 3-segment calls (Drive, Calendar, etc.)
- An entry with sub_resource matches only calls with that exact sub_resource value
- Duplicate detection updated to (service, resource, sub_resource) key

Runtime:
- Remove blanket sub_resource deny; is_operation_allowed now matches on all four
  dimensions including the optional sub_resource
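The four-dimension match described above can be sketched as follows (struct and function shapes are illustrative, not the actual patch):

```rust
/// Hedged sketch of one allowlist entry; the real config type has more fields.
struct AllowedOp {
    service: String,
    resource: String,
    sub_resource: Option<String>,
    method: String,
}

/// An empty allowlist permits everything; otherwise a call must match one
/// entry on all four dimensions, including the optional sub_resource.
fn is_operation_allowed(
    ops: &[AllowedOp],
    service: &str,
    resource: &str,
    sub_resource: Option<&str>,
    method: &str,
) -> bool {
    if ops.is_empty() {
        return true; // no allowlist configured
    }
    ops.iter().any(|op| {
        op.service == service
            && op.resource == resource
            && op.sub_resource.as_deref() == sub_resource
            && op.method == method
    })
}
```

An entry with `sub_resource: None` matches only 3-segment calls, so a Drive policy cannot be bypassed by a 4-segment call like `drive/files/permissions/list`.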

Tests:
- Add operation_allowlist_matches_gmail_sub_resource_shape
- Add operation_allowlist_matches_drive_3_segment_shape
- Add rejects_operation_with_unlisted_sub_resource
- Add google_workspace_allowed_operations_allow_same_resource_different_sub_resource
- Add google_workspace_allowed_operations_reject_invalid_sub_resource_characters
- Add google_workspace_allowed_operations_deserialize_without_sub_resource
- Update all existing tests to use correct gws command shapes

Docs:
- Spec: correct Gmail examples throughout; remove Sub-Resource Limitation section;
  update data model, validation rules, example use case, and follow-on work
- Config-reference (en, vi, zh-CN): add sub_resource field to allowed_operations
  table; update Gmail examples to correct 4-segment shapes

Platform:
- email-assistant SKILL.md: update allowed_operations paths to gmail/users/* shape

* fix(google_workspace): add classroom and events to service parameter schema description

* fix(google_workspace): cross-validate allowed_operations service against allowed_services

When allowed_services is explicitly configured, each allowed_operations entry's
service must appear in that list. An entry that can never match at runtime is a
misconfigured policy: it looks valid but silently produces a narrower scope than
the operator intended. Validation now rejects it with a clear error message.

Scope: only applies when allowed_services is non-empty. When it is empty, the tool
uses a built-in default list defined in the tool layer; the validator cannot
enumerate that list without duplicating the constant, so the cross-check is skipped.

Also:
- Update allowed_operations field doc-comment from 3-part (service, resource, method)
  to 4-part (service, resource, sub_resource, method) model
- Soften Gmail sub_resource "required" language in config-reference (en, vi, zh-CN)
  from a validation requirement to a runtime matching requirement — the validator
  does not and should not hardcode API shape knowledge for individual services
- Add tests: rejects operation service not in allowed_services; skips cross-check
  when allowed_services is empty

* fix(google_workspace): cross-validate allowed_operations.service against effective service set

When allowed_services is empty the validator was silently skipping the
service cross-check, allowing impossible configs like an unlisted service
in allowed_operations to pass validation and only fail at runtime.

Move DEFAULT_GWS_SERVICES from the tool layer (google_workspace.rs) into
schema.rs so the validator can use it unconditionally. When allowed_services
is explicitly set, validate against that set; when empty, fall back to
DEFAULT_GWS_SERVICES. Remove the now-incorrect "skips cross-check when empty"
test and add two replacement tests: one confirming a valid default service
passes, one confirming an unknown service is rejected even with empty
allowed_services.

* fix(google_workspace): update test assertion for new error message wording

* docs(google_workspace): fix stale 3-segment gmail example in TDD plan

* fix(google_workspace): address adversarial review round 4 findings

- Error message for denied operations now includes sub_resource when
  present, so gmail/users/messages/send and gmail/users/drafts/send
  produce distinct, debuggable errors.
- Audit log now records sub_resource, completing the trail for 4-segment
  Gmail operations.
- Normalize (trim) allowed_services and allowed_operations fields at
  construction time in new(). Runtime comparisons now use plain equality
  instead of .trim() on every call, removing the latent defect where a
  future code path could forget to trim and silently fail to match.
- Unify runtime character validation with schema validation: sub_resource
  and service/resource/method checks now both require lowercase alphanumeric
  plus underscore and hyphen, matching the validator's character set.
- Add positional_cmd_args() test helper and tests verifying 3-segment
  (Drive) and 4-segment (Gmail) argument ordering.
- Add test confirming page_limit without page_all passes validation.
- Add test confirming whitespace in config values is normalized at
  construction, not deferred to comparison time.
- Fix spec Runtime Enforcement section to reflect actual code order.

* fix(google_workspace): wire production helpers to close test coverage gaps

- Remove #[cfg(test)] from positional_cmd_args; execute() now calls the
  same function the arg-ordering tests exercise, so a drift in the real
  command-building path is caught by the existing tests.
- Extract build_pagination_args(page_all, page_limit) as a production
  method used by execute(). Replace the brittle page_limit_without_page_all
  test (which relied on environment-specific execution failure wording)
  with four direct assertions on build_pagination_args covering all
  page_all/page_limit combinations.

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-20 11:46:22 -04:00
ryankr 2a1b32f8bf feat(gateway): make request timeout configurable via env var (#4055)
Add `ZEROCLAW_GATEWAY_TIMEOUT_SECS` environment variable to override the
hardcoded 30-second gateway request timeout at runtime.

Agentic workloads with tool use (web search, MCP tools, sub-agent
delegation) regularly exceed 30 seconds, causing HTTP 408 timeouts at the
gateway layer — even though provider_timeout_secs allows longer LLM calls.

The default remains 30s for backward compatibility.

Co-authored-by: Claude Code (claude-opus-4-6) <noreply@anthropic.com>
2026-03-20 11:24:13 -04:00
Tomas Ward 505024d290 fix(provider): complete Anthropic OAuth setup-token authentication (#4053)
Setup tokens (sk-ant-oat01-*) from Claude Pro/Max subscriptions require
specific headers and a system prompt to authenticate successfully.
Without these, the API returns 400 Bad Request.

Changes to apply_auth():
- Add claude-code-20250219 and interleaved-thinking-2025-05-14 beta
  headers alongside existing oauth-2025-04-20
- Add anthropic-dangerous-direct-browser-access: true header

New apply_oauth_system_prompt() method:
- Prepends required "You are Claude Code" identity to system prompt
- Handles String, Blocks, and None system prompt variants

Changes to chat_with_system() and chat():
- Inject OAuth system prompt when using setup tokens
- Use NativeChatRequest/NativeChatResponse for proper SystemPrompt
  enum support in chat_with_system

Test updates:
- Updated apply_auth test to verify new beta headers and
  browser-access header

Tested with real OAuth token via `zeroclaw agent -m` — confirmed
working end-to-end.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-20 11:24:10 -04:00
ryankr a0c12b5a28 fix(agent): force sequential execution when tool_search is in batch (#4054)
tool_search activates deferred MCP tools into ActivatedToolSet at runtime.
When tool_search runs in parallel with the tools it activates, a race
condition occurs where tool lookups happen before activation completes,
resulting in "Unknown tool" errors.

Force sequential execution in should_execute_tools_in_parallel() whenever
tool_search is present in the tool call batch.

Co-authored-by: Claude Code (claude-opus-4-6) <noreply@anthropic.com>
2026-03-20 11:24:07 -04:00
243 changed files with 34735 additions and 3461 deletions
@@ -7,4 +7,6 @@ ignore = [
 "RUSTSEC-2026-0006", # wasmtime f64.copysign segfault on x86-64
 "RUSTSEC-2026-0020", # WASI guest-controlled resource exhaustion
 "RUSTSEC-2026-0021", # WASI http fields panic
+# instant crate unmaintained — transitive dep via nostr; no upstream fix
+"RUSTSEC-2024-0384",
 ]
@@ -2,7 +2,7 @@
 rustflags = ["-C", "link-arg=-static"]
 [target.aarch64-unknown-linux-musl]
-rustflags = ["-C", "link-arg=-static"]
+rustflags = ["-C", "link-arg=-static", "-C", "link-arg=-Wl,-z,stack-size=8388608"]
 # Android targets (NDK toolchain)
 [target.armv7-linux-androideabi]
@@ -10,3 +10,4 @@ linker = "armv7a-linux-androideabi21-clang"
 [target.aarch64-linux-android]
 linker = "aarch64-linux-android21-clang"
+rustflags = ["-C", "link-arg=-Wl,-z,stack-size=8388608"]
@@ -0,0 +1,97 @@
# Mem0 Integration: Dual-Scope Recall + Per-Turn Memory
## Context
Mem0 auto-save works but the integration is missing key features from mem0 best practices: per-turn recall, multi-level scoping, and proper context injection. This causes the bot to "forget" on follow-up turns and not differentiate users.
## What's Missing (vs mem0 docs)
1. **Per-turn recall** — only first turn gets memory context, follow-ups get nothing
2. **Dual-scope** — no sender vs group distinction. All memories use single hardcoded `user_id`
3. **System prompt injection** — memory prepended to user message (pollutes session history)
4. **`agent_id` scoping** — mem0 supports agent-level patterns, not used
## Changes
### 1. `src/memory/mem0.rs` — Use session_id for multi-level scoping
Map zeroclaw's `session_id` param to mem0's `user_id`. This enables per-user and per-group memory namespaces without changing the `Memory` trait.
```rust
// Add helper:
fn effective_user_id(&self, session_id: Option<&str>) -> &str {
session_id.filter(|s| !s.is_empty()).unwrap_or(&self.user_id)
}
// In store(): use effective_user_id(session_id) as mem0 user_id
// In recall(): use effective_user_id(session_id) as mem0 user_id
// In list(): use effective_user_id(session_id) as mem0 user_id
```
### 2. `src/channels/mod.rs` ~line 2229 — Per-turn dual-scope recall
Remove `if !had_prior_history` gate. Always recall from both sender scope and group scope (for group chats).
```rust
// Detect group chat
let is_group = msg.reply_target.contains("@g.us")
|| msg.reply_target.starts_with("group:");
// Sender-scope recall (always)
let sender_context = build_memory_context(
ctx.memory.as_ref(), &msg.content, ctx.min_relevance_score,
Some(&msg.sender),
).await;
// Group-scope recall (groups only)
let group_context = if is_group {
build_memory_context(
ctx.memory.as_ref(), &msg.content, ctx.min_relevance_score,
Some(&history_key),
).await
} else {
String::new()
};
// Merge (deduplicate by checking substring overlap)
let memory_context = merge_memory_contexts(&sender_context, &group_context);
```
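The `merge_memory_contexts` helper referenced above is not defined in this plan; a minimal sketch of the dedup-by-substring merge it describes might look like this (illustrative body, not the actual implementation):

```rust
/// Hedged sketch: concatenate sender- and group-scope memory contexts,
/// skipping group lines already present in the sender context
/// (substring overlap counts as a duplicate).
fn merge_memory_contexts(sender: &str, group: &str) -> String {
    let mut merged = sender.trim().to_string();
    for line in group.lines() {
        let line = line.trim();
        if line.is_empty() || merged.contains(line) {
            continue; // duplicate or blank line, drop it
        }
        if !merged.is_empty() {
            merged.push('\n');
        }
        merged.push_str(line);
    }
    merged
}
```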
### 3. `src/channels/mod.rs` ~line 2244 — Inject into system prompt
Move memory context from user message to system prompt. Re-fetched each turn, doesn't pollute session.
```rust
let mut system_prompt = build_channel_system_prompt(...);
if !memory_context.is_empty() {
    system_prompt.push_str(&format!("\n\n{memory_context}"));
}
let mut history = vec![ChatMessage::system(system_prompt)];
```
### 4. `src/channels/mod.rs` — Dual-scope auto-save
Locate the existing auto-save call. For group messages, store twice:
- `store(key, content, category, Some(&msg.sender))` — personal facts
- `store(key, content, category, Some(&history_key))` — group context
Both stores are async and non-blocking. DMs store only to the sender scope.
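The fan-out decision can be sketched as a pure helper (`save_scopes` is a hypothetical name; the real change simply calls `store()` twice inline):

```rust
// Hypothetical sketch: which memory scopes a message should be saved to.
// Group messages go to both the sender scope (personal facts) and the
// group scope (shared context); DMs go to the sender scope only.
fn save_scopes(is_group: bool, sender: &str, history_key: &str) -> Vec<String> {
    let mut scopes = vec![sender.to_string()];
    if is_group {
        scopes.push(history_key.to_string());
    }
    scopes
}
```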
### 5. `src/memory/mem0.rs` — Add `agent_id` support (optional)
Pass `self.app_name` as the `agent_id` parameter in mem0 API calls to enable agent-level behavior tracking.
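For illustration, a naive sketch of extending the search payload with the optional `agent_id` field (`mem0_search_body` is a hypothetical helper; real code should build the body with `serde_json` to get proper escaping):

```rust
// Hypothetical sketch: append the optional agent_id to a mem0 search
// request body. Naive string building with no JSON escaping; shown only
// to illustrate the payload shape.
fn mem0_search_body(query: &str, user_id: &str, agent_id: Option<&str>) -> String {
    let mut body = format!(r#"{{"query":"{query}","user_id":"{user_id}""#);
    if let Some(agent) = agent_id {
        body.push_str(&format!(r#","agent_id":"{agent}""#));
    }
    body.push('}');
    body
}
```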
## Files to Modify
1. `src/memory/mem0.rs` — session_id → user_id mapping
2. `src/channels/mod.rs` — per-turn recall, dual-scope, system prompt injection, dual-scope save
## Verification
1. `cargo check --features whatsapp-web,memory-mem0`
2. `cargo test --features whatsapp-web,memory-mem0`
3. Deploy to Synology
4. Test DM: "我鍾意食壽司" → next turn "我鍾意食咩" → should recall
5. Test group: Joe says "我鍾意食壽司" → someone else asks "Joe 鍾意食咩" → should recall from group scope
6. Check mem0 server logs: GET with `user_id=sender` AND `user_id=group_key`
7. Check mem0 server logs: POST with both user_ids for group messages
+41
@@ -1,3 +1,44 @@
# EditorConfig is awesome: https://EditorConfig.org
# top-most EditorConfig file
root = true
# All files
[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
# Rust files - match rustfmt.toml
[*.rs]
indent_size = 4
max_line_length = 100
# Markdown files
[*.md]
trim_trailing_whitespace = false
max_line_length = 80
# TOML files
[*.toml]
indent_size = 2
# YAML files
[*.{yml,yaml}]
indent_size = 2
# Python files
[*.py]
indent_size = 4
max_line_length = 100
# Shell scripts
[*.{sh,bash}]
indent_size = 2
# JSON files
[*.json]
indent_size = 2
+1 -1
@@ -154,7 +154,7 @@ jobs:
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Check all features
run: cargo check --all-features --locked
run: cargo check --features ci-all --locked
docs-quality:
name: Docs Quality
+17
@@ -19,6 +19,7 @@ env:
jobs:
detect-version-change:
name: Detect Version Bump
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
outputs:
changed: ${{ steps.check.outputs.changed }}
@@ -102,6 +103,22 @@ jobs:
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
run: sleep 15
- name: Publish to crates.io
shell: bash
env:
+18
@@ -67,6 +67,24 @@ jobs:
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
if: "!inputs.dry_run"
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
if: "!inputs.dry_run"
run: sleep 15
- name: Publish (dry run)
if: inputs.dry_run
run: cargo publish --dry-run --locked --allow-dirty --no-verify
@@ -21,6 +21,7 @@ env:
jobs:
version:
name: Resolve Version
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
outputs:
version: ${{ steps.ver.outputs.version }}
@@ -40,6 +41,7 @@ jobs:
release-notes:
name: Generate Release Notes
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
outputs:
notes: ${{ steps.notes.outputs.body }}
@@ -130,6 +132,7 @@ jobs:
web:
name: Build Web Dashboard
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
@@ -323,6 +323,21 @@ jobs:
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
run: sleep 15
- name: Publish to crates.io
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
+399 -19
@@ -2,6 +2,14 @@
# It is not intended for manual editing.
version = 4
[[package]]
name = "aardvark-sys"
version = "0.1.0"
dependencies = [
"libloading",
"thiserror 2.0.18",
]
[[package]]
name = "accessory"
version = "2.1.0"
@@ -109,6 +117,28 @@ version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "683d7910e743518b0e34f1186f92494becacb047c7b6bf616c96772180fef923"
[[package]]
name = "alsa"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed7572b7ba83a31e20d1b48970ee402d2e3e0537dcfe0a3ff4d6eb7508617d43"
dependencies = [
"alsa-sys",
"bitflags 2.11.0",
"cfg-if",
"libc",
]
[[package]]
name = "alsa-sys"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db8fee663d06c4e303404ef5f40488a53e062f89ba8bfed81f42325aafad1527"
dependencies = [
"libc",
"pkg-config",
]
[[package]]
name = "ambient-authority"
version = "0.0.2"
@@ -435,9 +465,9 @@ checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8"
[[package]]
name = "aws-lc-rs"
version = "1.16.1"
version = "1.16.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94bffc006df10ac2a68c83692d734a465f8ee6c5b384d8545a636f81d858f4bf"
checksum = "a054912289d18629dc78375ba2c3726a3afe3ff71b4edba9dedfca0e3446d1fc"
dependencies = [
"aws-lc-sys",
"zeroize",
@@ -445,9 +475,9 @@ dependencies = [
[[package]]
name = "aws-lc-sys"
version = "0.38.0"
version = "0.39.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4321e568ed89bb5a7d291a7f37997c2c0df89809d7b6d12062c81ddb54aa782e"
checksum = "1fa7e52a4c5c547c741610a2c6f123f3881e409b714cd27e6798ef020c514f0a"
dependencies = [
"cc",
"cmake",
@@ -575,6 +605,24 @@ dependencies = [
"virtue",
]
[[package]]
name = "bindgen"
version = "0.72.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "993776b509cfb49c750f11b8f07a46fa23e0a1386ffc01fb1e7d343efc387895"
dependencies = [
"bitflags 2.11.0",
"cexpr",
"clang-sys",
"itertools 0.13.0",
"proc-macro2",
"quote",
"regex",
"rustc-hash",
"shlex",
"syn 2.0.117",
]
[[package]]
name = "bip39"
version = "2.2.2"
@@ -870,6 +918,21 @@ dependencies = [
"shlex",
]
[[package]]
name = "cesu8"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d43a04d8753f35258c91f8ec639f792891f748a1edbd759cf1dcea3382ad83c"
[[package]]
name = "cexpr"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6fac387a98bb7c37292057cffc56d62ecb629900026402633ae9160df93a8766"
dependencies = [
"nom 7.1.3",
]
[[package]]
name = "cff-parser"
version = "0.1.0"
@@ -995,6 +1058,17 @@ dependencies = [
"zeroize",
]
[[package]]
name = "clang-sys"
version = "1.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b023947811758c97c59bf9d1c188fd619ad4718dcaa767947df1cadb14f39f4"
dependencies = [
"glob",
"libc",
"libloading",
]
[[package]]
name = "clap"
version = "4.6.0"
@@ -1078,6 +1152,16 @@ version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d07550c9036bf2ae0c684c4297d503f838287c83c53686d05370d0e139ae570"
[[package]]
name = "combine"
version = "4.6.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba5a308b75df32fe02788e748662718f03fde005016435c444eea572398219fd"
dependencies = [
"bytes",
"memchr",
]
[[package]]
name = "compression-codecs"
version = "0.4.37"
@@ -1201,6 +1285,49 @@ dependencies = [
"libm",
]
[[package]]
name = "coreaudio-rs"
version = "0.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "321077172d79c662f64f5071a03120748d5bb652f5231570141be24cfcd2bace"
dependencies = [
"bitflags 1.3.2",
"core-foundation-sys",
"coreaudio-sys",
]
[[package]]
name = "coreaudio-sys"
version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ceec7a6067e62d6f931a2baf6f3a751f4a892595bcec1461a3c94ef9949864b6"
dependencies = [
"bindgen",
]
[[package]]
name = "cpal"
version = "0.15.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "873dab07c8f743075e57f524c583985fbaf745602acbe916a01539364369a779"
dependencies = [
"alsa",
"core-foundation-sys",
"coreaudio-rs",
"dasp_sample",
"jni",
"js-sys",
"libc",
"mach2 0.4.3",
"ndk",
"ndk-context",
"oboe",
"wasm-bindgen",
"wasm-bindgen-futures",
"web-sys",
"windows",
]
[[package]]
name = "cpp_demangle"
version = "0.4.5"
@@ -1580,6 +1707,12 @@ dependencies = [
"parking_lot_core",
]
[[package]]
name = "dasp_sample"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c87e182de0887fd5361989c677c4e8f5000cd9491d6d563161a8f3a5519fc7f"
[[package]]
name = "data-encoding"
version = "2.10.0"
@@ -2098,9 +2231,9 @@ dependencies = [
[[package]]
name = "extism"
version = "1.13.0"
version = "1.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "31848a0c3cc19f6946767fc0b54bbf6b07ee0f2e53302ede5abda6a5ae02dbb6"
checksum = "491d31da92442abcbbbf6c1e3074abb308925a2384a615c79ac76420e4f790fc"
dependencies = [
"anyhow",
"cbindgen",
@@ -2124,9 +2257,9 @@ dependencies = [
[[package]]
name = "extism-convert"
version = "1.13.0"
version = "1.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4f6612b4e92559eeb4c2dac88a53ee8b4729bea64025befcdeb2b3677e62fc1d"
checksum = "c6a2f8c12ab80a3f810edef0d96fe7a5ffcc9ce59a534e81f1b6bd8e977c6772"
dependencies = [
"anyhow",
"base64",
@@ -2140,9 +2273,9 @@ dependencies = [
[[package]]
name = "extism-convert-macros"
version = "1.13.0"
version = "1.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "525831f1f15079a7c43514905579aac10f90fee46bc6353b683ed632029dd945"
checksum = "317ea3a0ba61991baf81ed51e7a59d840952e9aacd625b4d3bef39093e7c86e7"
dependencies = [
"manyhow",
"proc-macro-crate",
@@ -2153,9 +2286,9 @@ dependencies = [
[[package]]
name = "extism-manifest"
version = "1.13.0"
version = "1.20.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e60e36345a96ad0d74adfca64dc22d93eb4979ab15a6c130cded5e0585f31b10"
checksum = "9af75c1bfec0592bd51be27de1506f21fb0991d7caf81e691d4298d5dc254da5"
dependencies = [
"base64",
"serde",
@@ -2936,7 +3069,7 @@ dependencies = [
"js-sys",
"log",
"wasm-bindgen",
"windows-core",
"windows-core 0.62.2",
]
[[package]]
@@ -3387,6 +3520,28 @@ dependencies = [
"serde",
]
[[package]]
name = "jni"
version = "0.21.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a87aa2bb7d2af34197c04845522473242e1aa17c12f4935d5856491a7fb8c97"
dependencies = [
"cesu8",
"cfg-if",
"combine",
"jni-sys",
"log",
"thiserror 1.0.69",
"walkdir",
"windows-sys 0.45.0",
]
[[package]]
name = "jni-sys"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8eaf4bc02d17cbdd7ff4c7438cafcdf7fb9a4613313ad11b4f8fefe7d3fa0130"
[[package]]
name = "jobserver"
version = "0.1.34"
@@ -3510,6 +3665,16 @@ version = "0.2.183"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5b646652bf6661599e1da8901b3b9522896f01e736bad5f723fe7a3a27f899d"
[[package]]
name = "libloading"
version = "0.8.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7c4b02199fee7c5d21a5ae7d8cfa79a6ef5bb2fc834d6e9058e89c825efdc55"
dependencies = [
"cfg-if",
"windows-link",
]
[[package]]
name = "libm"
version = "0.2.16"
@@ -4224,6 +4389,35 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11ec1bc47d34ae756616f387c11fd0595f86f2cc7e6473bde9e3ded30cb902a1"
[[package]]
name = "ndk"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2076a31b7010b17a38c01907c45b945e8f11495ee4dd588309718901b1f7a5b7"
dependencies = [
"bitflags 2.11.0",
"jni-sys",
"log",
"ndk-sys",
"num_enum",
"thiserror 1.0.69",
]
[[package]]
name = "ndk-context"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "27b02d87554356db9e9a873add8782d4ea6e3e58ea071a9adb9a2e8ddb884a8b"
[[package]]
name = "ndk-sys"
version = "0.5.0+25.2.9519653"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c196769dd60fd4f363e11d948139556a344e79d451aeb2fa2fd040738ef7691"
dependencies = [
"jni-sys",
]
[[package]]
name = "negentropy"
version = "0.5.0"
@@ -4403,6 +4597,17 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf97ec579c3c42f953ef76dbf8d55ac91fb219dde70e49aa4a6b7d74e9919050"
[[package]]
name = "num-derive"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed3955f1a9c7c0c15e092f9c887db08b1fc683305fdf6eb6684f22555355e202"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.117",
]
[[package]]
name = "num-traits"
version = "0.2.19"
@@ -4422,6 +4627,28 @@ dependencies = [
"libc",
]
[[package]]
name = "num_enum"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d0bca838442ec211fa11de3a8b0e0e8f3a4522575b5c4c06ed722e005036f26"
dependencies = [
"num_enum_derive",
"rustversion",
]
[[package]]
name = "num_enum_derive"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "680998035259dcfcafe653688bf2aa6d3e2dc05e98be6ab46afb089dc84f1df8"
dependencies = [
"proc-macro-crate",
"proc-macro2",
"quote",
"syn 2.0.117",
]
[[package]]
name = "nusb"
version = "0.2.3"
@@ -4501,6 +4728,29 @@ dependencies = [
"ruzstd",
]
[[package]]
name = "oboe"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8b61bebd49e5d43f5f8cc7ee2891c16e0f41ec7954d36bcb6c14c5e0de867fb"
dependencies = [
"jni",
"ndk",
"ndk-context",
"num-derive",
"num-traits",
"oboe-sys",
]
[[package]]
name = "oboe-sys"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c8bb09a4a2b1d668170cfe0a7d5bc103f8999fb316c98099b6a9939c9f2e79d"
dependencies = [
"cc",
]
[[package]]
name = "once_cell"
version = "1.21.4"
@@ -4559,9 +4809,9 @@ dependencies = [
[[package]]
name = "opentelemetry-otlp"
version = "0.31.0"
version = "0.31.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a2366db2dca4d2ad033cad11e6ee42844fd727007af5ad04a1730f4cb8163bf"
checksum = "1f69cd6acbb9af919df949cd1ec9e5e7fdc2ef15d234b6b795aaa525cc02f71f"
dependencies = [
"http 1.4.0",
"opentelemetry",
@@ -6032,9 +6282,9 @@ dependencies = [
[[package]]
name = "rustls-webpki"
version = "0.103.9"
version = "0.103.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7df23109aa6c1567d1c575b9952556388da57401e4ace1d15f79eedad0d8f53"
checksum = "df33b2b81ac578cabaf06b89b0631153a3f416b0a886e8a7a1707fb51abbd1ef"
dependencies = [
"aws-lc-rs",
"ring",
@@ -7420,6 +7670,12 @@ dependencies = [
"syn 2.0.117",
]
[[package]]
name = "typed-path"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e28f89b80c87b8fb0cf04ab448d5dd0dd0ade2f8891bae878de66a75a28600e"
[[package]]
name = "typenum"
version = "1.19.0"
@@ -8667,6 +8923,26 @@ dependencies = [
"wasmtime-internal-math",
]
[[package]]
name = "windows"
version = "0.54.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9252e5725dbed82865af151df558e754e4a3c2c30818359eb17465f1346a1b49"
dependencies = [
"windows-core 0.54.0",
"windows-targets 0.52.6",
]
[[package]]
name = "windows-core"
version = "0.54.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "12661b9c89351d684a50a8a643ce5f608e20243b9fb84687800163429f161d65"
dependencies = [
"windows-result 0.1.2",
"windows-targets 0.52.6",
]
[[package]]
name = "windows-core"
version = "0.62.2"
@@ -8676,7 +8952,7 @@ dependencies = [
"windows-implement",
"windows-interface",
"windows-link",
"windows-result",
"windows-result 0.4.1",
"windows-strings",
]
@@ -8708,6 +8984,15 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
[[package]]
name = "windows-result"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e383302e8ec8515204254685643de10811af0ed97ea37210dc26fb0032647f8"
dependencies = [
"windows-targets 0.52.6",
]
[[package]]
name = "windows-result"
version = "0.4.1"
@@ -8726,6 +9011,15 @@ dependencies = [
"windows-link",
]
[[package]]
name = "windows-sys"
version = "0.45.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75283be5efb2831d37ea142365f009c02ec203cd29a3ebecbc093d52315b66d0"
dependencies = [
"windows-targets 0.42.2",
]
[[package]]
name = "windows-sys"
version = "0.52.0"
@@ -8762,6 +9056,21 @@ dependencies = [
"windows-link",
]
[[package]]
name = "windows-targets"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e5180c00cd44c9b1c88adb3693291f1cd93605ded80c250a75d472756b4d071"
dependencies = [
"windows_aarch64_gnullvm 0.42.2",
"windows_aarch64_msvc 0.42.2",
"windows_i686_gnu 0.42.2",
"windows_i686_msvc 0.42.2",
"windows_x86_64_gnu 0.42.2",
"windows_x86_64_gnullvm 0.42.2",
"windows_x86_64_msvc 0.42.2",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
@@ -8795,6 +9104,12 @@ dependencies = [
"windows_x86_64_msvc 0.53.1",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "597a5118570b68bc08d8d59125332c54f1ba9d9adeedeef5b99b02ba2b0698f8"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
@@ -8807,6 +9122,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a9d8416fa8b42f5c947f8482c43e7d89e73a173cead56d044f6a56104a6d1b53"
[[package]]
name = "windows_aarch64_msvc"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e08e8864a60f06ef0d0ff4ba04124db8b0fb3be5776a5cd47641e942e58c4d43"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
@@ -8819,6 +9140,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9d782e804c2f632e395708e99a94275910eb9100b2114651e04744e9b125006"
[[package]]
name = "windows_i686_gnu"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c61d927d8da41da96a81f029489353e68739737d3beca43145c8afec9a31a84f"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
@@ -8843,6 +9170,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa7359d10048f68ab8b09fa71c3daccfb0e9b559aed648a8f95469c27057180c"
[[package]]
name = "windows_i686_msvc"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44d840b6ec649f480a41c8d80f9c65108b92d89345dd94027bfe06ac444d1060"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
@@ -8855,6 +9188,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e7ac75179f18232fe9c285163565a57ef8d3c89254a30685b57d83a38d326c2"
[[package]]
name = "windows_x86_64_gnu"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8de912b8b8feb55c064867cf047dda097f92d51efad5b491dfb98f6bbb70cb36"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
@@ -8867,6 +9206,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9c3842cdd74a865a8066ab39c8a7a473c0778a3f29370b5fd6b4b9aa7df4a499"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26d41b46a36d453748aedef1486d5c7a85db22e56aff34643984ea85514e94a3"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
@@ -8879,6 +9224,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ffa179e2d07eee8ad8f57493436566c7cc30ac536a3379fdf008f47f6bb7ae1"
[[package]]
name = "windows_x86_64_msvc"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9aec5da331524158c6d1a4ac0ab1541149c0b9505fde06423b02f5ef0106b9f0"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
@@ -9179,8 +9530,9 @@ dependencies = [
[[package]]
name = "zeroclawlabs"
version = "0.5.2"
version = "0.5.6"
dependencies = [
"aardvark-sys",
"anyhow",
"async-imap",
"async-trait",
@@ -9192,6 +9544,7 @@ dependencies = [
"clap",
"clap_complete",
"console",
"cpal",
"criterion",
"cron",
"dialoguer",
@@ -9268,6 +9621,7 @@ dependencies = [
"webpki-roots 1.0.6",
"which",
"wiremock",
"zip",
]
[[package]]
@@ -9386,6 +9740,20 @@ dependencies = [
"syn 2.0.117",
]
[[package]]
name = "zip"
version = "8.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a243cfad17427fc077f529da5a95abe4e94fd2bfdb601611870a6557cc67657"
dependencies = [
"crc32fast",
"flate2",
"indexmap",
"memchr",
"typed-path",
"zopfli",
]
[[package]]
name = "zlib-rs"
version = "0.6.3"
@@ -9398,6 +9766,18 @@ version = "1.0.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8848ee67ecc8aedbaf3e4122217aff892639231befc6a1b58d29fff4c2cabaa"
[[package]]
name = "zopfli"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f05cd8797d63865425ff89b5c4a48804f35ba0ce8d125800027ad6017d2b5249"
dependencies = [
"bumpalo",
"crc32fast",
"log",
"simd-adler32",
]
[[package]]
name = "zstd"
version = "0.13.3"
+37 -3
@@ -1,10 +1,10 @@
[workspace]
members = [".", "crates/robot-kit"]
members = [".", "crates/robot-kit", "crates/aardvark-sys"]
resolver = "2"
[package]
name = "zeroclawlabs"
version = "0.5.2"
version = "0.5.6"
edition = "2021"
authors = ["theonlyhennygod"]
license = "MIT OR Apache-2.0"
@@ -89,10 +89,16 @@ indicatif = "0.18"
# Temp files (update pipeline rollback)
tempfile = "3.26"
# Zip extraction for ClawhHub / OpenClaw registry installers
zip = { version = "8.1", default-features = false, features = ["deflate"] }
# Error handling
anyhow = "1.0"
thiserror = "2.0"
# Aardvark I2C/SPI/GPIO USB adapter (Total Phase) — stub when SDK absent
aardvark-sys = { path = "crates/aardvark-sys", version = "0.1.0" }
# UUID generation
uuid = { version = "1.22", default-features = false, features = ["v4", "std"] }
@@ -191,7 +197,10 @@ probe-rs = { version = "0.31", optional = true }
pdf-extract = { version = "0.10", optional = true }
# WASM plugin runtime (extism)
extism = { version = "1.9", optional = true }
extism = { version = "1.20", optional = true }
# Cross-platform audio capture for voice wake word detection (optional, enable with --features voice-wake)
cpal = { version = "0.15", optional = true }
# Terminal QR rendering for WhatsApp Web pairing flow.
qrcode = { version = "0.14", optional = true }
@@ -222,6 +231,8 @@ channel-matrix = ["dep:matrix-sdk"]
channel-lark = ["dep:prost"]
channel-feishu = ["channel-lark"] # Alias for Feishu users (Lark and Feishu are the same platform)
memory-postgres = ["dep:postgres"]
# memory-mem0 = Mem0 (OpenMemory) memory backend via REST API
memory-mem0 = []
observability-prometheus = ["dep:prometheus"]
observability-otel = ["dep:opentelemetry", "dep:opentelemetry_sdk", "dep:opentelemetry-otlp"]
peripheral-rpi = ["rppal"]
@@ -244,8 +255,31 @@ rag-pdf = ["dep:pdf-extract"]
skill-creation = []
# whatsapp-web = Native WhatsApp Web client with custom rusqlite storage backend
whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "dep:serde-big-array", "dep:prost", "dep:qrcode"]
# voice-wake = Voice wake word detection via microphone (cpal)
voice-wake = ["dep:cpal"]
# WASM plugin system (extism-based)
plugins-wasm = ["dep:extism"]
# Meta-feature for CI: all features except those requiring system C libraries
# not available on standard CI runners (e.g., voice-wake needs libasound2-dev).
ci-all = [
"channel-nostr",
"hardware",
"channel-matrix",
"channel-lark",
"memory-postgres",
"memory-mem0",
"observability-prometheus",
"observability-otel",
"peripheral-rpi",
"browser-native",
"sandbox-landlock",
"sandbox-bubblewrap",
"probe",
"rag-pdf",
"skill-creation",
"whatsapp-web",
"plugins-wasm",
]
[profile.release]
opt-level = "z" # Optimize for size
+6 -4
@@ -23,9 +23,11 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
# 1. Copy manifests to cache dependencies
COPY Cargo.toml Cargo.lock ./
# Remove robot-kit from workspace members — it is excluded by .dockerignore
# and is not needed for the Docker build (hardware-only crate).
RUN sed -i 's/members = \[".", "crates\/robot-kit"\]/members = ["."]/' Cargo.toml
# Include every workspace member: Cargo.lock is generated for the full workspace.
# Previously we used sed to drop `crates/robot-kit`, which made the manifest disagree
# with the lockfile and caused `cargo --locked` to fail (Cargo refused to rewrite the lock).
COPY crates/robot-kit/ crates/robot-kit/
COPY crates/aardvark-sys/ crates/aardvark-sys/
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches \
&& echo "fn main() {}" > src/main.rs \
@@ -60,7 +62,7 @@ RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/regist
fi && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
RUN size=$(stat -c%s /app/zeroclaw 2>/dev/null || stat -f%z /app/zeroclaw) && \
RUN size=$(stat -c%s /app/zeroclaw) && \
if [ "$size" -lt 1000000 ]; then echo "ERROR: binary too small (${size} bytes), likely dummy build artifact" && exit 1; fi
# Prepare runtime directory structure and default config inline (no extra stage)
+5 -4
@@ -38,9 +38,10 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
# 1. Copy manifests to cache dependencies
COPY Cargo.toml Cargo.lock ./
# Remove robot-kit from workspace members — it is excluded by .dockerignore
# and is not needed for the Docker build (hardware-only crate).
RUN sed -i 's/members = \[".", "crates\/robot-kit"\]/members = ["."]/' Cargo.toml
# Include every workspace member: Cargo.lock is generated for the full workspace.
# Previously we used sed to drop `crates/robot-kit`, which made the manifest disagree
# with the lockfile and caused `cargo --locked` to fail (Cargo refused to rewrite the lock).
COPY crates/robot-kit/ crates/robot-kit/
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches \
&& echo "fn main() {}" > src/main.rs \
@@ -71,7 +72,7 @@ RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/regist
fi && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
RUN size=$(stat -c%s /app/zeroclaw 2>/dev/null || stat -f%z /app/zeroclaw) && \
RUN size=$(stat -c%s /app/zeroclaw) && \
if [ "$size" -lt 1000000 ]; then echo "ERROR: binary too small (${size} bytes), likely dummy build artifact" && exit 1; fi
# Prepare runtime directory structure and default config inline (no extra stage)
+15
@@ -41,3 +41,18 @@ This project uses third-party libraries and components,
each licensed under their respective terms.
See Cargo.lock for a complete dependency list.
Verifiable Intent Specification
================================
The src/verifiable_intent/ module is a Rust-native reimplementation based on
the Verifiable Intent open specification and reference implementation:
Project: Verifiable Intent (VI)
Author: agent-intent
Source: https://github.com/agent-intent/verifiable-intent
License: Apache License, Version 2.0
This implementation follows the VI specification design (SD-JWT layered
credentials, constraint model, three-layer chain). No source code was copied
from the reference implementation.
-41
@@ -324,47 +324,6 @@ ls -lh target/release/zeroclaw
- CI/CD: تجريبي (تلقائي عند الدفع) → مستقر (إرسال يدوي) → Docker، crates.io، Scoop، AUR، Homebrew، تغريدة.
- ملفات ثنائية مُعدة مسبقًا لـ Linux (x86_64، aarch64، armv7)، macOS (x86_64، aarch64)، Windows (x86_64).
## كيف يعمل (باختصار)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## التكوين
-41
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 ওয়েব ড্যাশবোর্
- CI/CD: বেটা (পুশে অটো) → স্টেবল (ম্যানুয়াল ডিসপ্যাচ) → Docker, crates.io, Scoop, AUR, Homebrew, টুইট।
- Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64) এর জন্য প্রি-বিল্ট বাইনারি।
## এটি কিভাবে কাজ করে (সংক্ষিপ্ত)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## কনফিগারেশন
-41
@@ -324,47 +324,6 @@ Webový panel React 19 + Vite 6 + Tailwind CSS 4 servírovaný přímo z Gateway
- CI/CD: beta (auto na push) → stable (ruční dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Předpřipravené binárky pro Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Jak to funguje (krátce)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Konfigurace
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 web-dashboard serveret direkte fra Gateway'en
- CI/CD: beta (auto on push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Forhaandsbyggede binaerer til Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Saadan virker det (kort)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Konfiguration
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 Web-Dashboard, direkt vom Gateway bereitgeste
- CI/CD: beta (automatisch bei Push) → stable (manueller Dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, Tweet.
- Vorgefertigte Binaries für Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Wie es funktioniert (kurz)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (Steuerungsebene) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web-Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Ratenbegrenzung │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Konfiguration
@@ -324,47 +324,6 @@ ls -lh target/release/zeroclaw
- CI/CD: beta (auto on push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Προκατασκευασμένα δυαδικά για Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Πώς λειτουργεί (σύντομα)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Ρύθμιση παραμέτρων
@@ -324,47 +324,6 @@ Panel web React 19 + Vite 6 + Tailwind CSS 4 servido directamente desde el Gatew
- CI/CD: beta (automático al hacer push) → stable (dispatch manual) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Binarios preconstruidos para Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Cómo funciona (resumen)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (plano de control) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Panel Web (React 19) │
│ REST API + WebSocket + SSE │
│ Emparejamiento + Limitación │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuración
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 web-hallintapaneeli, jota tarjoillaan suoraan
- CI/CD: beta (auto on push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Valmiit binaarit Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Miten se toimii (lyhyesti)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Maaritykset
@@ -324,47 +324,6 @@ Tableau de bord web React 19 + Vite 6 + Tailwind CSS 4 servi directement depuis
- CI/CD : beta (automatique au push) → stable (dispatch manuel) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Binaires précompilés pour Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Comment ça fonctionne (résumé)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (plan de contrôle) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Tableau de bord (React 19) │
│ REST API + WebSocket + SSE │
│ Appairage + Limitation │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuration
@@ -324,47 +324,6 @@ ls -lh target/release/zeroclaw
- CI/CD: בטא (אוטומטי בדחיפה) → יציב (שליחה ידנית) → Docker, crates.io, Scoop, AUR, Homebrew, ציוץ.
- בינאריים מוכנים מראש ל-Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## איך זה עובד (בקצרה)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## הגדרות
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 वेब डैशबोर्ड सीध
- CI/CD: बीटा (पुश पर ऑटो) → स्टेबल (मैनुअल डिस्पैच) → Docker, crates.io, Scoop, AUR, Homebrew, ट्वीट।
- Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64) के लिए प्री-बिल्ट बाइनरी।
## यह कैसे काम करता है (संक्षिप्त)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## कॉन्फ़िगरेशन
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 webes vezerlopult, amelyet kozvetlenul a Gate
- CI/CD: beta (auto on push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Elore elkeszitett binarisok Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64) rendszerekhez.
## Hogyan mukodik (roviden)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Konfiguracio
@@ -324,47 +324,6 @@ Dasbor web React 19 + Vite 6 + Tailwind CSS 4 yang disajikan langsung dari Gatew
- CI/CD: beta (otomatis saat push) → stable (dispatch manual) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Biner pre-built untuk Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Cara kerjanya (singkat)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Konfigurasi
@@ -324,47 +324,6 @@ Dashboard web React 19 + Vite 6 + Tailwind CSS 4 servita direttamente dal Gatewa
- CI/CD: beta (automatico al push) → stable (dispatch manuale) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Binari precompilati per Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Come funziona (sintesi)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (piano di controllo) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Dashboard Web (React 19) │
│ REST API + WebSocket + SSE │
│ Accoppiamento + Limitazione │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configurazione
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 ウェブダッシュボード、Gatewayか
- CI/CD:beta(プッシュ時自動)→ stable(手動ディスパッチ)→ Docker、crates.io、Scoop、AUR、Homebrew、tweet。
- プリビルドバイナリ:Linux(x86_64、aarch64、armv7)、macOS(x86_64、aarch64)、Windows(x86_64)。
## 仕組み(概要)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## 設定
@@ -324,47 +324,6 @@ Gateway에서 직접 제공하는 React 19 + Vite 6 + Tailwind CSS 4 웹 대시
- CI/CD: beta (push 시 자동) → stable (수동 디스패치) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64)용 사전 빌드 바이너리.
## 작동 방식 (요약)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## 구성
@@ -300,7 +300,7 @@ React 19 + Vite 6 + Tailwind CSS 4 web dashboard served directly from the Gatewa
- **Core:** shell, file read/write/edit, git operations, glob search, content search
- **Web:** browser control, web fetch, web search, screenshot, image info, PDF read
- **Integrations:** Jira, Notion, Google Workspace, Microsoft 365, LinkedIn, Composio, Pushover
- **Integrations:** Jira, Notion, Google Workspace, Microsoft 365, LinkedIn, Composio, Pushover, Weather (wttr.in)
- **MCP:** Model Context Protocol tool wrapper + deferred tool sets
- **Scheduling:** cron add/remove/update/run, schedule tool
- **Memory:** recall, store, forget, knowledge, project intel
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 web dashboard served directly from the Gatewa
- CI/CD: beta (auto on push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Pre-built binaries for Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## How it works (short)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuration
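The English hunk above removes the architecture diagram that sat just ahead of the `## Configuration` section. As an illustration of how the components in that diagram (the Gateway control plane on `127.0.0.1:42617`, the LLM provider, the md/sql memory backend) might map onto settings, here is a minimal config sketch — the file layout, key names, and values are hypothetical, not taken from the zeroclaw repository:

```toml
# Hypothetical zeroclaw configuration sketch — section and key names are
# illustrative only; consult the project's actual Configuration docs.

[gateway]
# Control-plane bind address shown in the architecture diagram.
bind = "127.0.0.1:42617"

[provider]
# LLM backend driving the agent loop.
kind = "openai"

[memory]
# The diagram labels memory "(md/sql)" — markdown- or SQLite-backed.
backend = "sqlite"
```

A sketch like this only mirrors the diagram's boxes; the real key names belong to the Configuration section each README points to next.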
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 nettbasert dashbord servert direkte fra Gatew
- CI/CD: beta (auto pa push) -> stabil (manuell utsendelse) -> Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Forhandsbygde binarfiler for Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Slik fungerer det (kort)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (kontrollplan) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Nettbasert dashbord (React 19)│
│ REST API + WebSocket + SSE │
│ Paring + Hastighetsbegrensning│
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Sloyfe │ │Planleg.│ │ Sverm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Leveran.│ │Verktoy │ │ Minne │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Sikker- │ │Periferiutst│
│ het │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Konfigurasjon
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 webdashboard geserveerd direct vanuit de Gate
- CI/CD: beta (auto bij push) → stable (handmatige dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Voorgebouwde binaries voor Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Hoe het werkt (kort)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuratie
@@ -324,47 +324,6 @@ Panel webowy React 19 + Vite 6 + Tailwind CSS 4 serwowany bezpośrednio z Gatewa
- CI/CD: beta (auto na push) → stable (ręczny dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Gotowe pliki binarne dla Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Jak to działa (w skrócie)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Konfiguracja
@@ -324,47 +324,6 @@ Painel web React 19 + Vite 6 + Tailwind CSS 4 servido diretamente pelo Gateway:
- CI/CD: beta (automático no push) → stable (dispatch manual) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Binários pré-construídos para Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Como funciona (resumo)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (plano de controle) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Painel Web (React 19) │
│ REST API + WebSocket + SSE │
│ Pareamento + Limitação │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuração
@@ -324,47 +324,6 @@ Panou web React 19 + Vite 6 + Tailwind CSS 4 servit direct din Gateway:
- CI/CD: beta (automat la push) → stable (dispatch manual) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Binare pre-construite pentru Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Cum funcționează (pe scurt)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configurare
@@ -324,47 +324,6 @@ ls -lh target/release/zeroclaw
- CI/CD: бета (авто при push) → стабильный (ручной запуск) → Docker, crates.io, Scoop, AUR, Homebrew, твит.
- Предсобранные бинарные файлы для Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Как это работает (кратко)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Конфигурация
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 webbpanel serverad direkt från Gateway:
- CI/CD: beta (automatiskt vid push) → stable (manuell dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Förbyggda binärer för Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Hur det fungerar (kort)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (kontrollplan) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Webbpanel (React 19) │
│ REST API + WebSocket + SSE │
│ Parkoppling + Hastighetsbegränsning │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Konfiguration
@@ -324,47 +324,6 @@ Feature-gated: Matrix (`channel-matrix`), Lark (`channel-lark`), Nostr (`channel
- CI/CD: beta (อัตโนมัติเมื่อ push) → stable (dispatch แบบ manual) → Docker, crates.io, Scoop, AUR, Homebrew, tweet
- ไบนารี pre-built สำหรับ Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64)
## วิธีการทำงาน (สั้น)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## การกำหนดค่า
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 web dashboard na direktang inihahatid mula sa
- CI/CD: beta (auto sa push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Pre-built binaries para sa Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Paano gumagana (maikli)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuration
@@ -324,47 +324,6 @@ Gateway'den doğrudan sunulan React 19 + Vite 6 + Tailwind CSS 4 web paneli:
- CI/CD: beta (push'ta otomatik) → stable (manuel dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64) için önceden derlenmiş ikili dosyalar.
## Nasıl çalışır (kısaca)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Yapılandırma
@@ -324,47 +324,6 @@ ls -lh target/release/zeroclaw
- CI/CD: beta (автоматично при push) → stable (ручний запуск) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Попередньо зібрані бінарні файли для Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## Як це працює (коротко)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuration
-41
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 web panel served directly from the Gateway
- CI/CD: beta (automatic on push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Prebuilt binaries for Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## How it works (in brief)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuration
-41
@@ -324,47 +324,6 @@ The React 19 + Vite 6 + Tailwind CSS 4 web dashboard is served directly from the Gateway
- CI/CD: beta (automatic on push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Prebuilt binaries for Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## How it works (in brief)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuration
-41
@@ -324,47 +324,6 @@ React 19 + Vite 6 + Tailwind CSS 4 web dashboard served directly from the Gateway:
- CI/CD: beta (automatic on push) → stable (manual dispatch) → Docker, crates.io, Scoop, AUR, Homebrew, tweet.
- Prebuilt binaries for Linux (x86_64, aarch64, armv7), macOS (x86_64, aarch64), Windows (x86_64).
## How it works (in brief)
```
WhatsApp / Telegram / Slack / Discord / Signal / iMessage / Matrix / IRC / Email
Bluesky / Nostr / Mattermost / DingTalk / Lark / QQ / Reddit / MQTT / WebSocket
┌───────────────────────────────┐
│ Gateway │
│ (control plane) │
│ http://127.0.0.1:42617 │
├───────────────────────────────┤
│ Web Dashboard (React 19) │
│ REST API + WebSocket + SSE │
│ Pairing + Rate Limiting │
└──────────────┬────────────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Agent │ │ Cron │ │ Hands │
│ Loop │ │Scheduler│ │ Swarm │
└───┬────┘ └───┬────┘ └───┬────┘
│ │ │
└──────────┼──────────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│Provider│ │ Tools │ │ Memory │
│ (LLM) │ │ (70+) │ │(md/sql)│
└────────┘ └────────┘ └────────┘
│ │
▼ ▼
┌────────┐ ┌────────────┐
│Security│ │ Peripherals│
│ Policy │ │(ESP32/STM32)│
└────────┘ └────────────┘
```
## Configuration
+1 -1
@@ -263,7 +263,7 @@ fn bench_memory_operations(c: &mut Criterion) {
c.bench_function("memory_recall_top10", |b| {
b.iter(|| {
rt.block_on(async {
mem.recall(black_box("zeroclaw agent"), 10, None)
mem.recall(black_box("zeroclaw agent"), 10, None, None, None)
.await
.unwrap()
})
+25
@@ -0,0 +1,25 @@
[package]
name = "aardvark-sys"
version = "0.1.0"
edition = "2021"
authors = ["theonlyhennygod"]
license = "MIT OR Apache-2.0"
description = "Low-level bindings for the Total Phase Aardvark I2C/SPI/GPIO USB adapter"
repository = "https://github.com/zeroclaw-labs/zeroclaw"
# NOTE: This crate is the ONLY place in ZeroClaw where unsafe code is permitted.
# The rest of the workspace remains #![forbid(unsafe_code)].
#
# The Total Phase SDK (aardvark.h + aardvark.so) is NOT yet committed.
# lib.rs loads aardvark.so at runtime via libloading; when the library
# cannot be found, all AardvarkHandle methods return
# Err(AardvarkError::LibraryNotFound).
#
# To enable real hardware (once SDK files are in vendor/):
# 1. Add `bindgen = "0.69"` to [build-dependencies]
# 2. Add `libc = "0.2"` to [dependencies]
# 3. Uncomment the build.rs bindgen call
# 4. Replace stub method bodies with FFI calls via mod bindings
[dependencies]
libloading = "0.8"
thiserror = "2.0"
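Applying the four enablement steps from the NOTE above would leave the manifest looking roughly like this — a sketch of the post-SDK state, not the committed file (the `libc` and `bindgen` lines are the additions the NOTE describes):

```toml
[package]
name = "aardvark-sys"
version = "0.1.0"
edition = "2021"

[dependencies]
libloading = "0.8"
thiserror = "2.0"
libc = "0.2"        # step 2: raw C types for the generated bindings

[build-dependencies]
bindgen = "0.69"    # step 1: regenerates src/bindings.rs from vendor/aardvark.h
```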
+27
@@ -0,0 +1,27 @@
//! Build script for aardvark-sys.
//!
//! # SDK present (real hardware)
//! When the Total Phase SDK files are in `vendor/`:
//! - Sets linker search path for aardvark.so
//! - Generates src/bindings.rs via bindgen
//!
//! # SDK absent (stub)
//! Does nothing. All AardvarkHandle methods return errors at runtime.
fn main() {
// Stub: SDK not yet in vendor/
// Uncomment and fill in when aardvark.h + aardvark.so are available:
//
// println!("cargo:rustc-link-search=native=crates/aardvark-sys/vendor");
// println!("cargo:rustc-link-lib=dylib=aardvark");
// println!("cargo:rerun-if-changed=vendor/aardvark.h");
//
// let bindings = bindgen::Builder::default()
// .header("vendor/aardvark.h")
// .parse_callbacks(Box::new(bindgen::CargoCallbacks::new()))
// .generate()
// .expect("Unable to generate aardvark bindings");
// bindings
// .write_to_file("src/bindings.rs")
// .expect("Could not write bindings");
}
+475
@@ -0,0 +1,475 @@
//! Bindings for the Total Phase Aardvark I2C/SPI/GPIO USB adapter.
//!
//! Uses [`libloading`] to load `aardvark.so` at runtime — the same pattern
//! the official Total Phase C stub (`aardvark.c`) uses internally.
//!
//! # Library search order
//!
//! 1. `ZEROCLAW_AARDVARK_LIB` environment variable (full path to `aardvark.so`)
//! 2. `<workspace>/crates/aardvark-sys/vendor/aardvark.so` (development default)
//! 3. `./aardvark.so` (next to the binary, for deployment)
//!
//! If none resolve, every method returns
//! [`Err(AardvarkError::LibraryNotFound)`](AardvarkError::LibraryNotFound).
//!
//! # Safety
//!
//! This crate is the **only** place in ZeroClaw where `unsafe` is permitted.
//! All `unsafe` is confined to `extern "C"` call sites inside this file.
//! The public API is fully safe Rust.
use std::path::PathBuf;
use std::sync::OnceLock;
use libloading::{Library, Symbol};
use thiserror::Error;
// ── Constants from aardvark.h ─────────────────────────────────────────────
/// Bit set on a port returned by `aa_find_devices` when that port is in use.
const AA_PORT_NOT_FREE: u16 = 0x8000;
/// Configure adapter for I2C + GPIO (I2C master mode, SPI disabled).
const AA_CONFIG_GPIO_I2C: i32 = 0x02;
/// Configure adapter for SPI + GPIO (SPI master mode, I2C disabled).
const AA_CONFIG_SPI_GPIO: i32 = 0x01;
/// No I2C flags (standard 7-bit addressing, normal stop condition).
const AA_I2C_NO_FLAGS: i32 = 0x00;
/// Enable both onboard I2C pullup resistors (hardware v2+ only).
const AA_I2C_PULLUP_BOTH: u8 = 0x03;
// ── Library loading ───────────────────────────────────────────────────────
static AARDVARK_LIB: OnceLock<Option<Library>> = OnceLock::new();
fn lib() -> Option<&'static Library> {
AARDVARK_LIB
.get_or_init(|| {
let candidates: Vec<PathBuf> = vec![
// 1. Explicit env-var override (full path)
std::env::var("ZEROCLAW_AARDVARK_LIB")
.ok()
.map(PathBuf::from)
.unwrap_or_default(),
// 2. Vendor directory shipped with this crate (dev default)
{
let mut p = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
p.push("vendor/aardvark.so");
p
},
// 3. Next to the running binary (deployment)
std::env::current_exe()
.ok()
.and_then(|e| e.parent().map(|d| d.join("aardvark.so")))
.unwrap_or_default(),
// 4. Current working directory
PathBuf::from("aardvark.so"),
];
let mut tried_any = false;
for path in &candidates {
if path.as_os_str().is_empty() {
continue;
}
tried_any = true;
match unsafe { Library::new(path) } {
Ok(lib) => {
// Verify the .so exports aa_c_version (Total Phase version gate).
// The .so exports c_aa_* symbols (not aa_*); aa_c_version is the
// one non-prefixed symbol used to confirm library identity.
let version_ok = unsafe {
lib.get::<unsafe extern "C" fn() -> u32>(b"aa_c_version\0").is_ok()
};
if !version_ok {
eprintln!(
"[aardvark-sys] {} loaded but aa_c_version not found — \
not a valid Aardvark library, skipping",
path.display()
);
continue;
}
eprintln!("[aardvark-sys] loaded library from {}", path.display());
return Some(lib);
}
Err(e) => {
let msg = e.to_string();
// Surface architecture mismatch explicitly — the most common
// failure on Apple Silicon machines with an x86_64 SDK.
if msg.contains("incompatible architecture") || msg.contains("mach-o file") {
eprintln!(
"[aardvark-sys] ARCHITECTURE MISMATCH loading {}: {}\n\
[aardvark-sys] The vendored aardvark.so is x86_64 but this \
binary is {}.\n\
[aardvark-sys] Download the arm64 SDK from https://www.totalphase.com/downloads/ \
or build with --target x86_64-apple-darwin.",
path.display(),
msg,
std::env::consts::ARCH,
);
} else {
eprintln!(
"[aardvark-sys] could not load {}: {}",
path.display(),
msg
);
}
}
}
}
if !tried_any {
eprintln!("[aardvark-sys] no library candidates found; set ZEROCLAW_AARDVARK_LIB or place aardvark.so next to the binary");
}
None
})
.as_ref()
}
/// Errors returned by Aardvark hardware operations.
#[derive(Debug, Error)]
pub enum AardvarkError {
/// No Aardvark adapter found — adapter not plugged in.
#[error("Aardvark adapter not found — is it plugged in?")]
NotFound,
/// `aa_open` returned a non-positive handle.
#[error("Aardvark open failed (code {0})")]
OpenFailed(i32),
/// `aa_i2c_write` returned a negative status code.
#[error("I2C write failed (code {0})")]
I2cWriteFailed(i32),
/// `aa_i2c_read` returned a negative status code.
#[error("I2C read failed (code {0})")]
I2cReadFailed(i32),
/// `aa_spi_write` returned a negative status code.
#[error("SPI transfer failed (code {0})")]
SpiTransferFailed(i32),
/// GPIO operation returned a negative status code.
#[error("GPIO error (code {0})")]
GpioError(i32),
/// `aardvark.so` could not be found or loaded.
#[error("aardvark.so not found — set ZEROCLAW_AARDVARK_LIB or place it next to the binary")]
LibraryNotFound,
}
/// Convenience `Result` alias for this crate.
pub type Result<T> = std::result::Result<T, AardvarkError>;
// ── Handle ────────────────────────────────────────────────────────────────
/// Safe RAII handle over the Aardvark C library handle.
///
/// Automatically closes the adapter on `Drop`.
///
/// **Usage pattern:** open a fresh handle per command and let it drop at the
/// end of each operation (lazy-open / eager-close).
pub struct AardvarkHandle {
handle: i32,
}
impl AardvarkHandle {
// ── Lifecycle ─────────────────────────────────────────────────────────
/// Open the first available (free) Aardvark adapter.
pub fn open() -> Result<Self> {
let ports = Self::find_devices();
let port = ports.first().copied().ok_or(AardvarkError::NotFound)?;
Self::open_port(i32::from(port))
}
/// Open a specific Aardvark adapter by port index.
pub fn open_port(port: i32) -> Result<Self> {
let lib = lib().ok_or(AardvarkError::LibraryNotFound)?;
let handle: i32 = unsafe {
let f: Symbol<unsafe extern "C" fn(i32) -> i32> = lib
.get(b"c_aa_open\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
f(port)
};
if handle <= 0 {
Err(AardvarkError::OpenFailed(handle))
} else {
Ok(Self { handle })
}
}
/// Return the port numbers of all **free** connected adapters.
///
/// Ports in-use by another process are filtered out.
/// Returns an empty `Vec` when `aardvark.so` cannot be loaded.
pub fn find_devices() -> Vec<u16> {
let Some(lib) = lib() else {
eprintln!("[aardvark-sys] find_devices: library not loaded");
return Vec::new();
};
let mut ports = [0u16; 16];
let n: i32 = unsafe {
let f: std::result::Result<Symbol<unsafe extern "C" fn(i32, *mut u16) -> i32>, _> =
lib.get(b"c_aa_find_devices\0");
match f {
Ok(f) => f(16, ports.as_mut_ptr()),
Err(e) => {
eprintln!("[aardvark-sys] find_devices: symbol lookup failed: {e}");
return Vec::new();
}
}
};
// aa_find_devices returns the total device count, which may exceed the
// 16-element buffer — clamp before slicing to avoid a panic.
let filled = (n.max(0) as usize).min(ports.len());
eprintln!(
"[aardvark-sys] find_devices: c_aa_find_devices returned {n}, ports={:?}",
&ports[..filled]
);
if n <= 0 {
return Vec::new();
}
let free: Vec<u16> = ports[..filled]
.iter()
.filter(|&&p| (p & AA_PORT_NOT_FREE) == 0)
.copied()
.collect();
eprintln!("[aardvark-sys] find_devices: free ports={free:?}");
free
}
// ── I2C ───────────────────────────────────────────────────────────────
/// Enable I2C mode and set the bitrate (kHz).
pub fn i2c_enable(&self, bitrate_khz: u32) -> Result<()> {
let lib = lib().ok_or(AardvarkError::LibraryNotFound)?;
unsafe {
let configure: Symbol<unsafe extern "C" fn(i32, i32) -> i32> = lib
.get(b"c_aa_configure\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
configure(self.handle, AA_CONFIG_GPIO_I2C);
let pullup: Symbol<unsafe extern "C" fn(i32, u8) -> i32> = lib
.get(b"c_aa_i2c_pullup\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
pullup(self.handle, AA_I2C_PULLUP_BOTH);
let bitrate: Symbol<unsafe extern "C" fn(i32, i32) -> i32> = lib
.get(b"c_aa_i2c_bitrate\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
bitrate(self.handle, bitrate_khz as i32);
}
Ok(())
}
/// Write `data` bytes to the I2C device at `addr`.
pub fn i2c_write(&self, addr: u8, data: &[u8]) -> Result<()> {
let lib = lib().ok_or(AardvarkError::LibraryNotFound)?;
let ret: i32 = unsafe {
let f: Symbol<unsafe extern "C" fn(i32, u16, i32, u16, *const u8) -> i32> = lib
.get(b"c_aa_i2c_write\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
f(
self.handle,
u16::from(addr),
AA_I2C_NO_FLAGS,
data.len() as u16,
data.as_ptr(),
)
};
if ret < 0 {
Err(AardvarkError::I2cWriteFailed(ret))
} else {
Ok(())
}
}
/// Read `len` bytes from the I2C device at `addr`.
pub fn i2c_read(&self, addr: u8, len: usize) -> Result<Vec<u8>> {
let lib = lib().ok_or(AardvarkError::LibraryNotFound)?;
let mut buf = vec![0u8; len];
let ret: i32 = unsafe {
let f: Symbol<unsafe extern "C" fn(i32, u16, i32, u16, *mut u8) -> i32> = lib
.get(b"c_aa_i2c_read\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
f(
self.handle,
u16::from(addr),
AA_I2C_NO_FLAGS,
len as u16,
buf.as_mut_ptr(),
)
};
if ret < 0 {
Err(AardvarkError::I2cReadFailed(ret))
} else {
Ok(buf)
}
}
/// Write then read — standard I2C register-read pattern.
pub fn i2c_write_read(&self, addr: u8, write_data: &[u8], read_len: usize) -> Result<Vec<u8>> {
self.i2c_write(addr, write_data)?;
self.i2c_read(addr, read_len)
}
/// Scan the I2C bus, returning addresses of all responding devices.
///
/// Probes `0x08`–`0x77` with a 1-byte read; returns addresses that ACK.
pub fn i2c_scan(&self) -> Vec<u8> {
let Some(lib) = lib() else {
return Vec::new();
};
let Ok(f): std::result::Result<
Symbol<unsafe extern "C" fn(i32, u16, i32, u16, *mut u8) -> i32>,
_,
> = (unsafe { lib.get(b"c_aa_i2c_read\0") }) else {
return Vec::new();
};
let mut found = Vec::new();
let mut buf = [0u8; 1];
for addr in 0x08u16..=0x77 {
let ret = unsafe { f(self.handle, addr, AA_I2C_NO_FLAGS, 1, buf.as_mut_ptr()) };
// ret > 0: bytes received → device ACKed
// ret == 0: NACK → no device at this address
// ret < 0: error code → skip
if ret > 0 {
found.push(addr as u8);
}
}
found
}
// ── SPI ───────────────────────────────────────────────────────────────
/// Enable SPI mode and set the bitrate (kHz).
pub fn spi_enable(&self, bitrate_khz: u32) -> Result<()> {
let lib = lib().ok_or(AardvarkError::LibraryNotFound)?;
unsafe {
let configure: Symbol<unsafe extern "C" fn(i32, i32) -> i32> = lib
.get(b"c_aa_configure\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
configure(self.handle, AA_CONFIG_SPI_GPIO);
// SPI mode 0: polarity=rising/falling(0), phase=sample/setup(0), MSB first(0)
let spi_cfg: Symbol<unsafe extern "C" fn(i32, i32, i32, i32) -> i32> = lib
.get(b"c_aa_spi_configure\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
spi_cfg(self.handle, 0, 0, 0);
let bitrate: Symbol<unsafe extern "C" fn(i32, i32) -> i32> = lib
.get(b"c_aa_spi_bitrate\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
bitrate(self.handle, bitrate_khz as i32);
}
Ok(())
}
/// Full-duplex SPI transfer.
///
/// Sends `send` bytes; returns the simultaneously received bytes (same length).
pub fn spi_transfer(&self, send: &[u8]) -> Result<Vec<u8>> {
let lib = lib().ok_or(AardvarkError::LibraryNotFound)?;
let mut recv = vec![0u8; send.len()];
// aa_spi_write(aardvark, out_num_bytes, data_out, in_num_bytes, data_in)
let ret: i32 = unsafe {
let f: Symbol<unsafe extern "C" fn(i32, u16, *const u8, u16, *mut u8) -> i32> = lib
.get(b"c_aa_spi_write\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
f(
self.handle,
send.len() as u16,
send.as_ptr(),
recv.len() as u16,
recv.as_mut_ptr(),
)
};
if ret < 0 {
Err(AardvarkError::SpiTransferFailed(ret))
} else {
Ok(recv)
}
}
// ── GPIO ──────────────────────────────────────────────────────────────
/// Set GPIO pin directions and output values.
///
/// `direction`: bitmask — `1` = output, `0` = input.
/// `value`: output state bitmask.
pub fn gpio_set(&self, direction: u8, value: u8) -> Result<()> {
let lib = lib().ok_or(AardvarkError::LibraryNotFound)?;
unsafe {
let dir_f: Symbol<unsafe extern "C" fn(i32, u8) -> i32> = lib
.get(b"c_aa_gpio_direction\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
let d = dir_f(self.handle, direction);
if d < 0 {
return Err(AardvarkError::GpioError(d));
}
let set_f: Symbol<unsafe extern "C" fn(i32, u8) -> i32> =
lib.get(b"c_aa_gpio_set\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
let r = set_f(self.handle, value);
if r < 0 {
return Err(AardvarkError::GpioError(r));
}
}
Ok(())
}
/// Read the current GPIO pin states as a bitmask.
pub fn gpio_get(&self) -> Result<u8> {
let lib = lib().ok_or(AardvarkError::LibraryNotFound)?;
let ret: i32 = unsafe {
let f: Symbol<unsafe extern "C" fn(i32) -> i32> = lib
.get(b"c_aa_gpio_get\0")
.map_err(|_| AardvarkError::LibraryNotFound)?;
f(self.handle)
};
if ret < 0 {
Err(AardvarkError::GpioError(ret))
} else {
Ok(ret as u8)
}
}
}
impl Drop for AardvarkHandle {
fn drop(&mut self) {
if let Some(lib) = lib() {
unsafe {
if let Ok(f) = lib.get::<unsafe extern "C" fn(i32) -> i32>(b"c_aa_close\0") {
f(self.handle);
}
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn find_devices_does_not_panic() {
// With no adapter plugged in, must return empty without panicking.
let _ = AardvarkHandle::find_devices();
}
#[test]
fn open_returns_error_or_ok_depending_on_hardware() {
// With hardware connected: open() succeeds (Ok).
// Without hardware: returns LibraryNotFound, NotFound, or OpenFailed — any Err is fine.
// Both outcomes are valid; the important thing is no panic.
let _ = AardvarkHandle::open();
}
#[test]
fn open_port_returns_error_when_no_hardware() {
// Port 99 doesn't exist — must return an error regardless of whether hardware is connected.
assert!(AardvarkHandle::open_port(99).is_err());
}
#[test]
fn error_display_messages_are_human_readable() {
assert!(AardvarkError::NotFound
.to_string()
.to_lowercase()
.contains("not found"));
assert!(AardvarkError::OpenFailed(-1).to_string().contains("-1"));
assert!(AardvarkError::I2cWriteFailed(-3)
.to_string()
.contains("I2C write"));
assert!(AardvarkError::SpiTransferFailed(-2)
.to_string()
.contains("SPI"));
assert!(AardvarkError::LibraryNotFound
.to_string()
.contains("aardvark.so"));
}
}
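The `AA_PORT_NOT_FREE` test that `find_devices` applies can be shown in isolation. This is a minimal sketch (the `free_ports` helper is illustrative, not part of the crate) using the worked example from the Total Phase header — devices on ports 0, 1, 2 with port 1 in use:

```rust
/// Bit set on a port entry when that port is already opened by another process.
const AA_PORT_NOT_FREE: u16 = 0x8000;

/// Keep only the free ports: a raw entry like 0x8001 means
/// "port 1, in use" and is filtered out.
fn free_ports(raw: &[u16]) -> Vec<u16> {
    raw.iter()
        .filter(|&&p| p & AA_PORT_NOT_FREE == 0)
        .copied()
        .collect()
}

fn main() {
    // Example from aardvark.h: array => 0x0000, 0x8001, 0x0002.
    let raw = [0x0000u16, 0x8001, 0x0002];
    assert_eq!(free_ports(&raw), vec![0, 2]);
    println!("free ports: {:?}", free_ports(&raw));
}
```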
+919
@@ -0,0 +1,919 @@
/*=========================================================================
| Aardvark Interface Library
|--------------------------------------------------------------------------
| Copyright (c) 2003-2024 Total Phase, Inc.
| All rights reserved.
| www.totalphase.com
|
| Redistribution and use of this file in source and binary forms, with
| or without modification, are permitted provided that the following
| conditions are met:
|
| - Redistributions of source code must retain the above copyright
| notice, this list of conditions, and the following disclaimer.
|
| - Redistributions in binary form must reproduce the above copyright
| notice, this list of conditions, and the following disclaimer in the
| documentation or other materials provided with the distribution.
|
| - This file must only be used to interface with Total Phase products.
| The names of Total Phase and its contributors must not be used to
| endorse or promote products derived from this software.
|
| THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
| "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING BUT NOT
| LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
| FOR A PARTICULAR PURPOSE, ARE DISCLAIMED. IN NO EVENT WILL THE
| COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
| INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING
| BUT NOT LIMITED TO PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
| LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
| CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
| LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
| ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
| POSSIBILITY OF SUCH DAMAGE.
|--------------------------------------------------------------------------
| To access Total Phase Aardvark devices through the API:
|
| 1) Use one of the following shared objects:
| aardvark.so -- Linux or macOS shared object
| aardvark.dll -- Windows dynamic link library
|
| 2) Along with one of the following language modules:
| aardvark.c/h -- C/C++ API header file and interface module
| aardvark_py.py -- Python API
| aardvark.cs -- C# .NET source
| aardvark_net.dll -- Compiled .NET binding
| aardvark.bas -- Visual Basic 6 API
========================================================================*/
#ifndef __aardvark_h__
#define __aardvark_h__
#ifdef __cplusplus
extern "C" {
#endif
/*=========================================================================
| TYPEDEFS
========================================================================*/
#ifndef TOTALPHASE_DATA_TYPES
#define TOTALPHASE_DATA_TYPES
#ifndef _MSC_VER
/* C99-compliant compilers (GCC) */
#include <stdint.h>
typedef uint8_t u08;
typedef uint16_t u16;
typedef uint32_t u32;
typedef uint64_t u64;
typedef int8_t s08;
typedef int16_t s16;
typedef int32_t s32;
typedef int64_t s64;
#else
/* Microsoft compilers (Visual C++) */
typedef unsigned __int8 u08;
typedef unsigned __int16 u16;
typedef unsigned __int32 u32;
typedef unsigned __int64 u64;
typedef signed __int8 s08;
typedef signed __int16 s16;
typedef signed __int32 s32;
typedef signed __int64 s64;
#endif /* __MSC_VER */
typedef float f32;
typedef double f64;
#endif /* TOTALPHASE_DATA_TYPES */
/*=========================================================================
| DEBUG
========================================================================*/
/* Set the following macro to '1' for debugging */
#define AA_DEBUG 0
/*=========================================================================
| VERSION
========================================================================*/
#define AA_HEADER_VERSION 0x0600 /* v6.00 */
/*=========================================================================
| STATUS CODES
========================================================================*/
/*
* All API functions return an integer which is the result of the
* transaction, or a status code if negative. The status codes are
* defined as follows:
*/
enum AardvarkStatus {
/* General codes (0 to -99) */
AA_OK = 0,
AA_UNABLE_TO_LOAD_LIBRARY = -1,
AA_UNABLE_TO_LOAD_DRIVER = -2,
AA_UNABLE_TO_LOAD_FUNCTION = -3,
AA_INCOMPATIBLE_LIBRARY = -4,
AA_INCOMPATIBLE_DEVICE = -5,
AA_COMMUNICATION_ERROR = -6,
AA_UNABLE_TO_OPEN = -7,
AA_UNABLE_TO_CLOSE = -8,
AA_INVALID_HANDLE = -9,
AA_CONFIG_ERROR = -10,
/* I2C codes (-100 to -199) */
AA_I2C_NOT_AVAILABLE = -100,
AA_I2C_NOT_ENABLED = -101,
AA_I2C_READ_ERROR = -102,
AA_I2C_WRITE_ERROR = -103,
AA_I2C_SLAVE_BAD_CONFIG = -104,
AA_I2C_SLAVE_READ_ERROR = -105,
AA_I2C_SLAVE_TIMEOUT = -106,
AA_I2C_DROPPED_EXCESS_BYTES = -107,
AA_I2C_BUS_ALREADY_FREE = -108,
/* SPI codes (-200 to -299) */
AA_SPI_NOT_AVAILABLE = -200,
AA_SPI_NOT_ENABLED = -201,
AA_SPI_WRITE_ERROR = -202,
AA_SPI_SLAVE_READ_ERROR = -203,
AA_SPI_SLAVE_TIMEOUT = -204,
AA_SPI_DROPPED_EXCESS_BYTES = -205,
/* GPIO codes (-400 to -499) */
AA_GPIO_NOT_AVAILABLE = -400
};
#ifndef __cplusplus
typedef enum AardvarkStatus AardvarkStatus;
#endif
/*=========================================================================
| GENERAL TYPE DEFINITIONS
========================================================================*/
/* Aardvark handle type definition */
typedef int Aardvark;
/*
* Deprecated type definitions.
*
* These are only for use with legacy code and
* should not be used for new development.
*/
typedef u08 aa_u08;
typedef u16 aa_u16;
typedef u32 aa_u32;
typedef s08 aa_s08;
typedef s16 aa_s16;
typedef s32 aa_s32;
/*
* Aardvark version matrix.
*
* This matrix describes the various version dependencies
* of Aardvark components. It can be used to determine
* which component caused an incompatibility error.
*
* All version numbers are of the format:
* (major << 8) | minor
*
* ex. v1.20 would be encoded as: 0x0114
*/
struct AardvarkVersion {
/* Software, firmware, and hardware versions. */
u16 software;
u16 firmware;
u16 hardware;
/* Firmware requires that software must be >= this version. */
u16 sw_req_by_fw;
/* Software requires that firmware must be >= this version. */
u16 fw_req_by_sw;
/* Software requires that the API interface must be >= this version. */
u16 api_req_by_sw;
};
#ifndef __cplusplus
typedef struct AardvarkVersion AardvarkVersion;
#endif
/*=========================================================================
| GENERAL API
========================================================================*/
/*
* Get a list of ports to which Aardvark devices are attached.
*
* nelem = maximum number of elements to return
* devices = array into which the port numbers are returned
*
* Each element of the array is written with the port number.
* Devices that are in-use are ORed with AA_PORT_NOT_FREE (0x8000).
*
* ex. devices are attached to ports 0, 1, 2
* ports 0 and 2 are available, and port 1 is in-use.
* array => 0x0000, 0x8001, 0x0002
*
* If the array is NULL, it is not filled with any values.
* If there are more devices than the array size, only the
* first nmemb port numbers will be written into the array.
*
* Returns the number of devices found, regardless of the
* array size.
*/
#define AA_PORT_NOT_FREE 0x8000
int aa_find_devices (
int num_devices,
u16 * devices
);
/*
* Get a list of ports to which Aardvark devices are attached.
*
* This function is the same as aa_find_devices() except that
* it returns the unique IDs of each Aardvark device. The IDs
* are guaranteed to be non-zero if valid.
*
* The IDs are the unsigned integer representation of the 10-digit
* serial numbers.
*/
int aa_find_devices_ext (
int num_devices,
u16 * devices,
int num_ids,
u32 * unique_ids
);
/*
* Open the Aardvark port.
*
* The port number is a zero-indexed integer.
*
* The port number is the same as that obtained from the
* aa_find_devices() function above.
*
* Returns an Aardvark handle, which is guaranteed to be
* greater than zero if it is valid.
*
* This function is recommended for use in simple applications
* where extended information is not required. For more complex
* applications, the use of aa_open_ext() is recommended.
*/
Aardvark aa_open (
int port_number
);
/*
* Open the Aardvark port, returning extended information
* in the supplied structure. Behavior is otherwise identical
* to aa_open() above. If 0 is passed as the pointer to the
* structure, this function is exactly equivalent to aa_open().
*
* The structure is zeroed before the open is attempted.
* It is filled with whatever information is available.
*
* For example, if the firmware version is not filled, then
* the device could not be queried for its version number.
*
* This function is recommended for use in complex applications
* where extended information is required. For more simple
* applications, the use of aa_open() is recommended.
*/
struct AardvarkExt {
/* Version matrix */
AardvarkVersion version;
/* Features of this device. */
int features;
};
#ifndef __cplusplus
typedef struct AardvarkExt AardvarkExt;
#endif
Aardvark aa_open_ext (
int port_number,
AardvarkExt * aa_ext
);
/* Close the Aardvark port. */
int aa_close (
Aardvark aardvark
);
/*
* Return the port for this Aardvark handle.
*
* The port number is a zero-indexed integer.
*/
int aa_port (
Aardvark aardvark
);
/*
* Return the device features as a bit-mask of values, or
* an error code if the handle is not valid.
*/
#define AA_FEATURE_SPI 0x00000001
#define AA_FEATURE_I2C 0x00000002
#define AA_FEATURE_GPIO 0x00000008
int aa_features (
Aardvark aardvark
);
/*
* Return the unique ID for this Aardvark adapter.
* IDs are guaranteed to be non-zero if valid.
* The ID is the unsigned integer representation of the
* 10-digit serial number.
*/
u32 aa_unique_id (
Aardvark aardvark
);
/*
* Return the status string for the given status code.
* If the code is not valid or the library function cannot
* be loaded, return a NULL string.
*/
const char * aa_status_string (
int status
);
/*
* Enable logging to a file. The handle must be standard file
* descriptor. In C, a file descriptor can be obtained by using
* the ANSI C function "open" or by using the function "fileno"
* on a FILE* stream. A FILE* stream can be obtained using "fopen"
* or can correspond to the common "stdout" or "stderr" --
* available when including stdlib.h
*/
#define AA_LOG_STDOUT 1
#define AA_LOG_STDERR 2
int aa_log (
Aardvark aardvark,
int level,
int handle
);
/*
* Return the version matrix for the device attached to the
* given handle. If the handle is 0 or invalid, only the
* software and required api versions are set.
*/
int aa_version (
Aardvark aardvark,
AardvarkVersion * version
);
/*
* Configure the device by enabling/disabling I2C, SPI, and
* GPIO functions.
*/
enum AardvarkConfig {
AA_CONFIG_GPIO_ONLY = 0x00,
AA_CONFIG_SPI_GPIO = 0x01,
AA_CONFIG_GPIO_I2C = 0x02,
AA_CONFIG_SPI_I2C = 0x03,
AA_CONFIG_QUERY = 0x80
};
#ifndef __cplusplus
typedef enum AardvarkConfig AardvarkConfig;
#endif
#define AA_CONFIG_SPI_MASK 0x00000001
#define AA_CONFIG_I2C_MASK 0x00000002
int aa_configure (
Aardvark aardvark,
AardvarkConfig config
);
/*
* Configure the target power pins.
* This is only supported on hardware versions >= 2.00
*/
#define AA_TARGET_POWER_NONE 0x00
#define AA_TARGET_POWER_BOTH 0x03
#define AA_TARGET_POWER_QUERY 0x80
int aa_target_power (
Aardvark aardvark,
u08 power_mask
);
/*
* Sleep for the specified number of milliseconds
* Accuracy depends on the operating system scheduler
* Returns the number of milliseconds slept
*/
u32 aa_sleep_ms (
u32 milliseconds
);
/*=========================================================================
| ASYNC MESSAGE POLLING
========================================================================*/
/*
* Polling function to check if there are any asynchronous
* messages pending for processing. The function takes a timeout
* value in units of milliseconds. If the timeout is < 0, the
* function will block until data is received. If the timeout is 0,
* the function will perform a non-blocking check.
*/
#define AA_ASYNC_NO_DATA 0x00000000
#define AA_ASYNC_I2C_READ 0x00000001
#define AA_ASYNC_I2C_WRITE 0x00000002
#define AA_ASYNC_SPI 0x00000004
int aa_async_poll (
Aardvark aardvark,
int timeout
);
/*=========================================================================
| I2C API
========================================================================*/
/* Free the I2C bus. */
int aa_i2c_free_bus (
Aardvark aardvark
);
/*
* Set the I2C bit rate in kilohertz. If a zero is passed as the
* bitrate, the bitrate is unchanged and the current bitrate is
* returned.
*/
int aa_i2c_bitrate (
Aardvark aardvark,
int bitrate_khz
);
/*
* Set the bus lock timeout. If a zero is passed as the timeout,
* the timeout is unchanged and the current timeout is returned.
*/
int aa_i2c_bus_timeout (
Aardvark aardvark,
u16 timeout_ms
);
enum AardvarkI2cFlags {
AA_I2C_NO_FLAGS = 0x00,
AA_I2C_10_BIT_ADDR = 0x01,
AA_I2C_COMBINED_FMT = 0x02,
AA_I2C_NO_STOP = 0x04,
AA_I2C_SIZED_READ = 0x10,
AA_I2C_SIZED_READ_EXTRA1 = 0x20
};
#ifndef __cplusplus
typedef enum AardvarkI2cFlags AardvarkI2cFlags;
#endif
/* Read a stream of bytes from the I2C slave device. */
int aa_i2c_read (
Aardvark aardvark,
u16 slave_addr,
AardvarkI2cFlags flags,
u16 num_bytes,
u08 * data_in
);
enum AardvarkI2cStatus {
AA_I2C_STATUS_OK = 0,
AA_I2C_STATUS_BUS_ERROR = 1,
AA_I2C_STATUS_SLA_ACK = 2,
AA_I2C_STATUS_SLA_NACK = 3,
AA_I2C_STATUS_DATA_NACK = 4,
AA_I2C_STATUS_ARB_LOST = 5,
AA_I2C_STATUS_BUS_LOCKED = 6,
AA_I2C_STATUS_LAST_DATA_ACK = 7
};
#ifndef __cplusplus
typedef enum AardvarkI2cStatus AardvarkI2cStatus;
#endif
/*
* Read a stream of bytes from the I2C slave device.
* This API function returns the number of bytes read into
* the num_read variable. The return value of the function
* is a status code.
*/
int aa_i2c_read_ext (
Aardvark aardvark,
u16 slave_addr,
AardvarkI2cFlags flags,
u16 num_bytes,
u08 * data_in,
u16 * num_read
);
/* Write a stream of bytes to the I2C slave device. */
int aa_i2c_write (
Aardvark aardvark,
u16 slave_addr,
AardvarkI2cFlags flags,
u16 num_bytes,
const u08 * data_out
);
/*
* Write a stream of bytes to the I2C slave device.
* This API function returns the number of bytes written into
* the num_written variable. The return value of the function
* is a status code.
*/
int aa_i2c_write_ext (
Aardvark aardvark,
u16 slave_addr,
AardvarkI2cFlags flags,
u16 num_bytes,
const u08 * data_out,
u16 * num_written
);
/*
* Do an atomic write+read to an I2C slave device by first
* writing a stream of bytes to the I2C slave device and then
* reading a stream of bytes back from the same slave device.
* This API function returns the number of bytes written into
* the num_written variable and the number of bytes read into
* the num_read variable. The return value of the function is
* the status given as (read_status << 8) | (write_status).
*/
int aa_i2c_write_read (
Aardvark aardvark,
u16 slave_addr,
AardvarkI2cFlags flags,
u16 out_num_bytes,
const u08 * out_data,
u16 * num_written,
u16 in_num_bytes,
u08 * in_data,
u16 * num_read
);
/* Enable/Disable the Aardvark as an I2C slave device */
int aa_i2c_slave_enable (
Aardvark aardvark,
u08 addr,
u16 maxTxBytes,
u16 maxRxBytes
);
int aa_i2c_slave_disable (
Aardvark aardvark
);
/*
* Set the slave response in the event the Aardvark is put
* into slave mode and contacted by a Master.
*/
int aa_i2c_slave_set_response (
Aardvark aardvark,
u08 num_bytes,
const u08 * data_out
);
/*
* Return number of bytes written from a previous
* Aardvark->I2C_master transmission. Since the transmission is
* happening asynchronously with respect to the PC host
* software, there could be responses queued up from many
* previous write transactions.
*/
int aa_i2c_slave_write_stats (
Aardvark aardvark
);
/* Read the bytes from an I2C slave reception */
int aa_i2c_slave_read (
Aardvark aardvark,
u08 * addr,
u16 num_bytes,
u08 * data_in
);
/* Extended functions that return status code */
int aa_i2c_slave_write_stats_ext (
Aardvark aardvark,
u16 * num_written
);
int aa_i2c_slave_read_ext (
Aardvark aardvark,
u08 * addr,
u16 num_bytes,
u08 * data_in,
u16 * num_read
);
/*
* Configure the I2C pullup resistors.
* This is only supported on hardware versions >= 2.00
*/
#define AA_I2C_PULLUP_NONE 0x00
#define AA_I2C_PULLUP_BOTH 0x03
#define AA_I2C_PULLUP_QUERY 0x80
int aa_i2c_pullup (
Aardvark aardvark,
u08 pullup_mask
);
/*=========================================================================
| SPI API
========================================================================*/
/*
* Set the SPI bit rate in kilohertz. If a zero is passed as the
* bitrate, the bitrate is unchanged and the current bitrate is
* returned.
*/
int aa_spi_bitrate (
Aardvark aardvark,
int bitrate_khz
);
/*
* These configuration parameters specify how to clock the
* bits that are sent and received on the Aardvark SPI
* interface.
*
* The polarity option specifies which transition
* constitutes the leading edge and which transition is the
* falling edge. For example, AA_SPI_POL_RISING_FALLING
* would configure the SPI to idle the SCK clock line low.
* The clock would then transition low-to-high on the
* leading edge and high-to-low on the trailing edge.
*
* The phase option determines whether to sample or setup on
* the leading edge. For example, AA_SPI_PHASE_SAMPLE_SETUP
* would configure the SPI to sample on the leading edge and
* setup on the trailing edge.
*
* The bitorder option is used to indicate whether LSB or
* MSB is shifted first.
*
* See the diagrams in the Aardvark datasheet for
* more details.
*/
enum AardvarkSpiPolarity {
AA_SPI_POL_RISING_FALLING = 0,
AA_SPI_POL_FALLING_RISING = 1
};
#ifndef __cplusplus
typedef enum AardvarkSpiPolarity AardvarkSpiPolarity;
#endif
enum AardvarkSpiPhase {
AA_SPI_PHASE_SAMPLE_SETUP = 0,
AA_SPI_PHASE_SETUP_SAMPLE = 1
};
#ifndef __cplusplus
typedef enum AardvarkSpiPhase AardvarkSpiPhase;
#endif
enum AardvarkSpiBitorder {
AA_SPI_BITORDER_MSB = 0,
AA_SPI_BITORDER_LSB = 1
};
#ifndef __cplusplus
typedef enum AardvarkSpiBitorder AardvarkSpiBitorder;
#endif
/* Configure the SPI master or slave interface */
int aa_spi_configure (
Aardvark aardvark,
AardvarkSpiPolarity polarity,
AardvarkSpiPhase phase,
AardvarkSpiBitorder bitorder
);
/* Write a stream of bytes to the downstream SPI slave device. */
int aa_spi_write (
Aardvark aardvark,
u16 out_num_bytes,
const u08 * data_out,
u16 in_num_bytes,
u08 * data_in
);
/* Enable/Disable the Aardvark as an SPI slave device */
int aa_spi_slave_enable (
Aardvark aardvark
);
int aa_spi_slave_disable (
Aardvark aardvark
);
/*
* Set the slave response in the event the Aardvark is put
* into slave mode and contacted by a Master.
*/
int aa_spi_slave_set_response (
Aardvark aardvark,
u08 num_bytes,
const u08 * data_out
);
/* Read the bytes from an SPI slave reception */
int aa_spi_slave_read (
Aardvark aardvark,
u16 num_bytes,
u08 * data_in
);
/*
* Change the output polarity on the SS line.
*
* Note: When configured as an SPI slave, the Aardvark will
* always be set up with SS as active low. Hence this function
* only affects the SPI master functions on the Aardvark.
*/
enum AardvarkSpiSSPolarity {
AA_SPI_SS_ACTIVE_LOW = 0,
AA_SPI_SS_ACTIVE_HIGH = 1
};
#ifndef __cplusplus
typedef enum AardvarkSpiSSPolarity AardvarkSpiSSPolarity;
#endif
int aa_spi_master_ss_polarity (
Aardvark aardvark,
AardvarkSpiSSPolarity polarity
);
/*=========================================================================
| GPIO API
========================================================================*/
/*
* The following enumerated type maps the named lines on the
* Aardvark I2C/SPI line to bit positions in the GPIO API.
* All GPIO API functions will index these lines through an
* 8-bit masked value. Thus, each bit position in the mask
* can be referred back to its corresponding line through the
* enumerated type.
*/
enum AardvarkGpioBits {
AA_GPIO_SCL = 0x01,
AA_GPIO_SDA = 0x02,
AA_GPIO_MISO = 0x04,
AA_GPIO_SCK = 0x08,
AA_GPIO_MOSI = 0x10,
AA_GPIO_SS = 0x20
};
#ifndef __cplusplus
typedef enum AardvarkGpioBits AardvarkGpioBits;
#endif
/*
* Configure the GPIO, specifying the direction of each bit.
*
* A call to this function will not change the value of the pullup
* mask in the Aardvark. This is illustrated by the following
* example:
* (1) Direction mask is first set to 0x00
* (2) Pullup is set to 0x01
* (3) Direction mask is set to 0x01
* (4) Direction mask is later set back to 0x00.
*
* The pullup will be active after (4).
*
* On Aardvark power-up, the default value of the direction
* mask is 0x00.
*/
#define AA_GPIO_DIR_INPUT 0
#define AA_GPIO_DIR_OUTPUT 1
int aa_gpio_direction (
Aardvark aardvark,
u08 direction_mask
);
/*
* Enable an internal pullup on any of the GPIO input lines.
*
* Note: If a line is configured as an output, the pullup bit
* for that line will be ignored, though that pullup bit will
* be cached in case the line is later configured as an input.
*
* By default the pullup mask is 0x00.
*/
#define AA_GPIO_PULLUP_OFF 0
#define AA_GPIO_PULLUP_ON 1
int aa_gpio_pullup (
Aardvark aardvark,
u08 pullup_mask
);
/*
* Read the current digital values on the GPIO input lines.
*
* The bits will be ordered as described by AA_GPIO_BITS. If a
* line is configured as an output, its corresponding bit
* position in the mask will be undefined.
*/
int aa_gpio_get (
Aardvark aardvark
);
/*
* Set the outputs on the GPIO lines.
*
* Note: If a line is configured as an input, it will not be
* affected by this call, but the output value for that line
* will be cached in the event that the line is later
* configured as an output.
*/
int aa_gpio_set (
Aardvark aardvark,
u08 value
);
/*
* Block until there is a change on the GPIO input lines.
* Pins configured as outputs will be ignored.
*
* The function will return either when a change has occurred or
* the timeout expires. The timeout, specified in milliseconds, has
* a precision of ~16 ms. The maximum allowable timeout is
* approximately 4 seconds. If the timeout expires, this function
* will return the current state of the GPIO lines.
*
* This function will return immediately with the current value
* of the GPIO lines for the first invocation after any of the
* following functions are called: aa_configure,
* aa_gpio_direction, or aa_gpio_pullup.
*
* If the function aa_gpio_get is called before calling
* aa_gpio_change, aa_gpio_change will only register any changes
* from the value last returned by aa_gpio_get.
*/
int aa_gpio_change (
Aardvark aardvark,
u16 timeout
);
#ifdef __cplusplus
}
#endif
#endif /* __aardvark_h__ */
Binary file not shown.
@@ -0,0 +1,80 @@
#!/bin/bash
# Start mem0 + reranker GPU container for ZeroClaw memory backend.
#
# Required env vars:
# MEM0_LLM_API_KEY or ZAI_API_KEY — API key for the LLM used in fact extraction
#
# Optional env vars (with defaults):
# MEM0_LLM_PROVIDER — mem0 LLM provider (default: "openai" i.e. OpenAI-compatible)
# MEM0_LLM_MODEL — LLM model for fact extraction (default: "glm-5-turbo")
# MEM0_LLM_BASE_URL — LLM API base URL (default: "https://api.z.ai/api/coding/paas/v4")
# MEM0_EMBEDDER_MODEL — embedding model (default: "BAAI/bge-m3")
# MEM0_EMBEDDER_DIMS — embedding dimensions (default: "1024")
# MEM0_EMBEDDER_DEVICE — "cuda", "cpu", or "auto" (default: "cuda")
# MEM0_VECTOR_COLLECTION — Qdrant collection name (default: "zeroclaw_mem0")
# RERANKER_MODEL — reranker model (default: "BAAI/bge-reranker-v2-m3")
# RERANKER_DEVICE — "cuda" or "cpu" (default: "cuda")
# MEM0_PORT — mem0 server port (default: 8765)
# RERANKER_PORT — reranker server port (default: 8678)
# CONTAINER_IMAGE — base container image (default: docker.io/kyuz0/amd-strix-halo-comfyui:latest)
# CONTAINER_NAME — container name (default: mem0-gpu)
# DATA_DIR — host path for Qdrant data (default: ~/mem0-data)
# SCRIPT_DIR — host path for server scripts (default: directory of this script)
set -e
# Resolve script directory for mounting server scripts
SCRIPT_DIR="${SCRIPT_DIR:-$(cd "$(dirname "$0")" && pwd)}"
# API key — accept either name
export MEM0_LLM_API_KEY="${MEM0_LLM_API_KEY:-${ZAI_API_KEY:?MEM0_LLM_API_KEY or ZAI_API_KEY must be set}}"
# Defaults
MEM0_LLM_MODEL="${MEM0_LLM_MODEL:-glm-5-turbo}"
MEM0_LLM_BASE_URL="${MEM0_LLM_BASE_URL:-https://api.z.ai/api/coding/paas/v4}"
MEM0_PORT="${MEM0_PORT:-8765}"
RERANKER_PORT="${RERANKER_PORT:-8678}"
CONTAINER_IMAGE="${CONTAINER_IMAGE:-docker.io/kyuz0/amd-strix-halo-comfyui:latest}"
CONTAINER_NAME="${CONTAINER_NAME:-mem0-gpu}"
DATA_DIR="${DATA_DIR:-$HOME/mem0-data}"
# Stop existing CPU services (if any)
kill -9 $(pgrep -f "mem0-server.py") 2>/dev/null || true
kill -9 $(pgrep -f "reranker-server.py") 2>/dev/null || true
# Stop existing container
podman stop "$CONTAINER_NAME" 2>/dev/null || true
podman rm "$CONTAINER_NAME" 2>/dev/null || true
podman run -d --name "$CONTAINER_NAME" \
--device /dev/dri --device /dev/kfd \
--group-add video --group-add render \
--restart unless-stopped \
-p "$MEM0_PORT:$MEM0_PORT" -p "$RERANKER_PORT:$RERANKER_PORT" \
-v "$DATA_DIR":/root/mem0-data:Z \
-v "$SCRIPT_DIR/mem0-server.py":/app/mem0-server.py:ro,Z \
-v "$SCRIPT_DIR/reranker-server.py":/app/reranker-server.py:ro,Z \
-v "$HOME/.cache/huggingface":/root/.cache/huggingface:Z \
-e MEM0_LLM_API_KEY="$MEM0_LLM_API_KEY" \
-e ZAI_API_KEY="$MEM0_LLM_API_KEY" \
-e MEM0_LLM_MODEL="$MEM0_LLM_MODEL" \
-e MEM0_LLM_BASE_URL="$MEM0_LLM_BASE_URL" \
${MEM0_LLM_PROVIDER:+-e MEM0_LLM_PROVIDER="$MEM0_LLM_PROVIDER"} \
${MEM0_EMBEDDER_MODEL:+-e MEM0_EMBEDDER_MODEL="$MEM0_EMBEDDER_MODEL"} \
${MEM0_EMBEDDER_DIMS:+-e MEM0_EMBEDDER_DIMS="$MEM0_EMBEDDER_DIMS"} \
${MEM0_EMBEDDER_DEVICE:+-e MEM0_EMBEDDER_DEVICE="$MEM0_EMBEDDER_DEVICE"} \
${MEM0_VECTOR_COLLECTION:+-e MEM0_VECTOR_COLLECTION="$MEM0_VECTOR_COLLECTION"} \
${RERANKER_MODEL:+-e RERANKER_MODEL="$RERANKER_MODEL"} \
${RERANKER_DEVICE:+-e RERANKER_DEVICE="$RERANKER_DEVICE"} \
-e RERANKER_PORT="$RERANKER_PORT" \
-e RERANKER_URL="http://127.0.0.1:$RERANKER_PORT/rerank" \
-e TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 \
-e HOME=/root \
"$CONTAINER_IMAGE" \
bash -c "pip install -q FlagEmbedding mem0ai flask httpx qdrant-client 2>&1 | tail -3; echo '=== Starting reranker (GPU) on :$RERANKER_PORT ==='; python3 /app/reranker-server.py & sleep 3; echo '=== Starting mem0 (GPU) on :$MEM0_PORT ==='; exec python3 /app/mem0-server.py"
echo "Container started, waiting for init..."
sleep 15
echo "=== Container logs ==="
podman logs "$CONTAINER_NAME" 2>&1 | tail -25
echo "=== Port check ==="
ss -tlnp | grep "$MEM0_PORT\|$RERANKER_PORT" || echo "Ports not yet ready, check: podman logs $CONTAINER_NAME"
@@ -0,0 +1,288 @@
"""Minimal OpenMemory-compatible REST server wrapping mem0 Python SDK."""
import asyncio
import json, os, uuid, httpx
from datetime import datetime, timezone
from fastapi import FastAPI, Query
from pydantic import BaseModel
from typing import Optional
from mem0 import Memory
app = FastAPI()
RERANKER_URL = os.environ.get("RERANKER_URL", "http://127.0.0.1:8678/rerank")
CUSTOM_EXTRACTION_PROMPT = """You are a memory extraction specialist for a Cantonese/Chinese chat assistant.
Extract ONLY important, persistent facts from the conversation. Rules:
1. Extract personal preferences, habits, relationships, names, locations
2. Extract decisions, plans, and commitments people make
3. SKIP small talk, greetings, reactions ("ok", "哈哈", "係呀")
4. SKIP temporary states ("我依家食緊飯") unless they reveal a habit
5. Keep facts in the ORIGINAL language (Cantonese/Chinese/English)
6. For each fact, note WHO it's about (use their name or identifier if known)
7. Merge/update existing facts rather than creating duplicates
Return a list of facts in JSON format: {"facts": ["fact1", "fact2", ...]}
"""
PROCEDURAL_EXTRACTION_PROMPT = """You are a procedural memory specialist for an AI assistant.
Extract HOW-TO patterns and reusable procedures from the conversation trace. Rules:
1. Identify step-by-step procedures the assistant followed to accomplish a task
2. Extract tool usage patterns: which tools were called, in what order, with what arguments
3. Capture decision points: why the assistant chose one approach over another
4. Note error-recovery patterns: what failed, how it was fixed
5. Keep the procedure generic enough to apply to similar future tasks
6. Preserve technical details (commands, file paths, API calls) that are reusable
7. SKIP greetings, small talk, and conversational filler
8. Format each procedure as: "To [goal]: [step1] -> [step2] -> ... -> [result]"
Return a list of procedures in JSON format: {"facts": ["procedure1", "procedure2", ...]}
"""
# ── Configurable via environment variables ─────────────────────────
# LLM (for fact extraction when infer=true)
MEM0_LLM_PROVIDER = os.environ.get("MEM0_LLM_PROVIDER", "openai") # "openai" (compatible), "anthropic", etc.
MEM0_LLM_MODEL = os.environ.get("MEM0_LLM_MODEL", "glm-5-turbo")
MEM0_LLM_API_KEY = os.environ.get("MEM0_LLM_API_KEY") or os.environ.get("ZAI_API_KEY", "")
MEM0_LLM_BASE_URL = os.environ.get("MEM0_LLM_BASE_URL", "https://api.z.ai/api/coding/paas/v4")
# Embedder
MEM0_EMBEDDER_PROVIDER = os.environ.get("MEM0_EMBEDDER_PROVIDER", "huggingface") # "huggingface", "openai", etc.
MEM0_EMBEDDER_MODEL = os.environ.get("MEM0_EMBEDDER_MODEL", "BAAI/bge-m3")
MEM0_EMBEDDER_DIMS = int(os.environ.get("MEM0_EMBEDDER_DIMS", "1024"))
MEM0_EMBEDDER_DEVICE = os.environ.get("MEM0_EMBEDDER_DEVICE", "cuda") # "cuda", "cpu", "auto"
# Vector store
MEM0_VECTOR_PROVIDER = os.environ.get("MEM0_VECTOR_PROVIDER", "qdrant") # "qdrant", "chroma", etc.
MEM0_VECTOR_COLLECTION = os.environ.get("MEM0_VECTOR_COLLECTION", "zeroclaw_mem0")
MEM0_VECTOR_PATH = os.environ.get("MEM0_VECTOR_PATH", os.path.expanduser("~/mem0-data/qdrant"))
config = {
"llm": {
"provider": MEM0_LLM_PROVIDER,
"config": {
"model": MEM0_LLM_MODEL,
"api_key": MEM0_LLM_API_KEY,
"openai_base_url": MEM0_LLM_BASE_URL,
},
},
"embedder": {
"provider": MEM0_EMBEDDER_PROVIDER,
"config": {
"model": MEM0_EMBEDDER_MODEL,
"embedding_dims": MEM0_EMBEDDER_DIMS,
"model_kwargs": {"device": MEM0_EMBEDDER_DEVICE},
},
},
"vector_store": {
"provider": MEM0_VECTOR_PROVIDER,
"config": {
"collection_name": MEM0_VECTOR_COLLECTION,
"embedding_model_dims": MEM0_EMBEDDER_DIMS,
"path": MEM0_VECTOR_PATH,
},
},
"custom_fact_extraction_prompt": CUSTOM_EXTRACTION_PROMPT,
}
m = Memory.from_config(config)
def rerank_results(query: str, items: list, top_k: int = 10) -> list:
"""Rerank search results using bge-reranker-v2-m3."""
if not items:
return items
documents = [item.get("memory", "") for item in items]
try:
resp = httpx.post(
RERANKER_URL,
json={"query": query, "documents": documents, "top_k": top_k},
timeout=10.0,
)
resp.raise_for_status()
ranked = resp.json().get("results", [])
return [items[r["index"]] for r in ranked]
except Exception as e:
print(f"Reranker failed, using original order: {e}")
return items
class AddMemoryRequest(BaseModel):
user_id: str
text: str
metadata: Optional[dict] = None
infer: bool = True
app: Optional[str] = None
custom_instructions: Optional[str] = None
@app.post("/api/v1/memories/")
async def add_memory(req: AddMemoryRequest):
# Use client-supplied prompt, fall back to server default, then mem0 SDK default
prompt = req.custom_instructions or CUSTOM_EXTRACTION_PROMPT
result = await asyncio.to_thread(m.add, req.text, user_id=req.user_id, metadata=req.metadata or {}, prompt=prompt)
return {"id": str(uuid.uuid4()), "status": "ok", "result": result}
class ProceduralMemoryRequest(BaseModel):
user_id: str
messages: list[dict]
metadata: Optional[dict] = None
@app.post("/api/v1/memories/procedural")
async def add_procedural_memory(req: ProceduralMemoryRequest):
"""Store a conversation trace as procedural memory.
Accepts a list of messages (role/content dicts) representing a full
conversation turn including tool calls, then uses mem0's native
procedural memory extraction to learn reusable "how to" patterns.
"""
# Build metadata with procedural type marker
meta = {"type": "procedural"}
if req.metadata:
meta.update(req.metadata)
# Use mem0's native message list support + procedural prompt
result = await asyncio.to_thread(m.add,
req.messages,
user_id=req.user_id,
metadata=meta,
prompt=PROCEDURAL_EXTRACTION_PROMPT,
)
return {"id": str(uuid.uuid4()), "status": "ok", "result": result}
def _parse_mem0_results(raw_results) -> list:
raw = raw_results.get("results", raw_results) if isinstance(raw_results, dict) else raw_results
items = []
for r in raw:
item = r if isinstance(r, dict) else {"memory": str(r)}
items.append({
"id": item.get("id", str(uuid.uuid4())),
"memory": item.get("memory", item.get("text", "")),
"created_at": item.get("created_at", datetime.now(timezone.utc).isoformat()),
"metadata_": item.get("metadata", {}),
})
return items
def _parse_iso_timestamp(value: str) -> Optional[datetime]:
"""Parse an ISO 8601 timestamp string, returning None on failure."""
try:
dt = datetime.fromisoformat(value)
if dt.tzinfo is None:
dt = dt.replace(tzinfo=timezone.utc)
return dt
except (ValueError, TypeError):
return None
def _item_created_at(item: dict) -> Optional[datetime]:
"""Extract created_at from an item as a timezone-aware datetime."""
raw = item.get("created_at")
if raw is None:
return None
if isinstance(raw, datetime):
if raw.tzinfo is None:
raw = raw.replace(tzinfo=timezone.utc)
return raw
return _parse_iso_timestamp(str(raw))
def _apply_post_filters(
items: list,
created_after: Optional[str],
created_before: Optional[str],
) -> list:
"""Filter items by created_after / created_before timestamps (post-query)."""
after_dt = _parse_iso_timestamp(created_after) if created_after else None
before_dt = _parse_iso_timestamp(created_before) if created_before else None
if after_dt is None and before_dt is None:
return items
filtered = []
for item in items:
ts = _item_created_at(item)
if ts is None:
# Keep items without a parseable timestamp
filtered.append(item)
continue
if after_dt and ts < after_dt:
continue
if before_dt and ts > before_dt:
continue
filtered.append(item)
return filtered
@app.get("/api/v1/memories/")
async def list_or_search_memories(
user_id: str = Query(...),
search_query: Optional[str] = Query(None),
size: int = Query(10),
rerank: bool = Query(True),
created_after: Optional[str] = Query(None),
created_before: Optional[str] = Query(None),
metadata_filter: Optional[str] = Query(None),
):
# Build mem0 SDK filters dict from metadata_filter JSON param
sdk_filters = None
if metadata_filter:
try:
sdk_filters = json.loads(metadata_filter)
except json.JSONDecodeError:
sdk_filters = None
if search_query:
# Fetch more results than needed so reranker has candidates to work with
fetch_size = min(size * 3, 50)
results = await asyncio.to_thread(m.search,
search_query,
user_id=user_id,
limit=fetch_size,
filters=sdk_filters,
)
items = _parse_mem0_results(results)
items = _apply_post_filters(items, created_after, created_before)
if rerank and items:
items = rerank_results(search_query, items, top_k=size)
else:
items = items[:size]
return {"items": items, "total": len(items)}
else:
results = await asyncio.to_thread(m.get_all, user_id=user_id, filters=sdk_filters)
items = _parse_mem0_results(results)
items = _apply_post_filters(items, created_after, created_before)
return {"items": items, "total": len(items)}
@app.delete("/api/v1/memories/{memory_id}")
async def delete_memory(memory_id: str):
try:
await asyncio.to_thread(m.delete, memory_id)
except Exception:
pass
return {"status": "ok"}
@app.get("/api/v1/memories/{memory_id}/history")
async def get_memory_history(memory_id: str):
"""Return the edit history of a specific memory."""
try:
history = await asyncio.to_thread(m.history, memory_id)
# Normalize to list of dicts
entries = []
raw = history if isinstance(history, list) else history.get("results", history) if isinstance(history, dict) else [history]
for h in raw:
entry = h if isinstance(h, dict) else {"event": str(h)}
entries.append(entry)
return {"memory_id": memory_id, "history": entries}
except Exception as e:
return {"memory_id": memory_id, "history": [], "error": str(e)}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8765)
@@ -0,0 +1,50 @@
from flask import Flask, request, jsonify
from FlagEmbedding import FlagReranker
import os, torch
app = Flask(__name__)
reranker = None
# ── Configurable via environment variables ─────────────────────────
RERANKER_MODEL = os.environ.get("RERANKER_MODEL", "BAAI/bge-reranker-v2-m3")
RERANKER_DEVICE = os.environ.get("RERANKER_DEVICE", "cuda" if torch.cuda.is_available() else "cpu")
RERANKER_PORT = int(os.environ.get("RERANKER_PORT", "8678"))
def get_reranker():
global reranker
if reranker is None:
reranker = FlagReranker(RERANKER_MODEL, use_fp16=True, device=RERANKER_DEVICE)
return reranker
@app.route('/rerank', methods=['POST'])
def rerank():
data = request.json
query = data.get('query', '')
documents = data.get('documents', [])
top_k = data.get('top_k', len(documents))
if not query or not documents:
return jsonify({'error': 'query and documents required'}), 400
pairs = [[query, doc] for doc in documents]
scores = get_reranker().compute_score(pairs)
if isinstance(scores, float):
scores = [scores]
results = sorted(
[{'index': i, 'document': doc, 'score': score}
for i, (doc, score) in enumerate(zip(documents, scores))],
key=lambda x: x['score'], reverse=True
)[:top_k]
return jsonify({'results': results})
@app.route('/health', methods=['GET'])
def health():
return jsonify({'status': 'ok', 'model': RERANKER_MODEL, 'device': RERANKER_DEVICE})
if __name__ == '__main__':
print(f'Loading reranker model ({RERANKER_MODEL}) on {RERANKER_DEVICE}...')
get_reranker()
print(f'Reranker server ready on :{RERANKER_PORT}')
app.run(host='0.0.0.0', port=RERANKER_PORT)
@@ -1,6 +1,6 @@
pkgbase = zeroclaw
pkgdesc = Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant.
pkgver = 0.4.3
pkgver = 0.5.6
pkgrel = 1
url = https://github.com/zeroclaw-labs/zeroclaw
arch = x86_64
@@ -10,7 +10,7 @@ pkgbase = zeroclaw
makedepends = git
depends = gcc-libs
depends = openssl
source = zeroclaw-0.4.3.tar.gz::https://github.com/zeroclaw-labs/zeroclaw/archive/refs/tags/v0.4.3.tar.gz
source = zeroclaw-0.5.6.tar.gz::https://github.com/zeroclaw-labs/zeroclaw/archive/refs/tags/v0.5.6.tar.gz
sha256sums = SKIP
pkgname = zeroclaw
@@ -1,6 +1,6 @@
# Maintainer: zeroclaw-labs <bot@zeroclaw.dev>
pkgname=zeroclaw
pkgver=0.4.3
pkgver=0.5.6
pkgrel=1
pkgdesc="Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant."
arch=('x86_64')
@@ -1,11 +1,11 @@
{
"version": "0.5.2",
"version": "0.5.6",
"description": "Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant.",
"homepage": "https://github.com/zeroclaw-labs/zeroclaw",
"license": "MIT|Apache-2.0",
"architecture": {
"64bit": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v0.5.2/zeroclaw-x86_64-pc-windows-msvc.zip",
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v0.5.6/zeroclaw-x86_64-pc-windows-msvc.zip",
"hash": "",
"bin": "zeroclaw.exe"
}
@@ -0,0 +1,325 @@
# Aardvark Integration — How It Works
A plain-language walkthrough of every piece and how they connect.
---
## The Big Picture
```
┌──────────────────────────────────────────────────────────────┐
│ STARTUP (boot) │
│ │
│ 1. Ask aardvark-sys: "any adapters plugged in?" │
│ 2. For each one found → register a device + transport │
│ 3. Load tools only if hardware was found │
└──────────────────────────────────────────┬───────────────────┘
┌──────────────────────▼──────────────────────┐
│ RUNTIME (agent loop) │
│ │
│ User: "scan i2c bus" │
│ → agent calls i2c_scan tool │
│ → tool builds a ZcCommand │
│ → AardvarkTransport sends to hardware │
│ → response flows back as text │
└──────────────────────────────────────────────┘
```
---
## Layer by Layer
### Layer 1 — `aardvark-sys` (the USB talker)
**File:** `crates/aardvark-sys/src/lib.rs`
This is the only layer that ever touches the raw C library.
Think of it as a thin translator: it turns C function calls into safe Rust.
**Algorithm:**
```
find_devices()
→ call aa_find_devices(16, buf) // ask C lib how many adapters
→ return Vec of port numbers // [0, 1, ...] one per adapter
open_port(port)
→ call aa_open(port) // open that specific adapter
→ if handle ≤ 0, return OpenFailed
→ else return AardvarkHandle{ _port: handle }
i2c_scan(handle)
→ for addr in 0x08..=0x77 // every valid 7-bit address
try aa_i2c_read(addr, 1 byte) // knock on the door
if ACK → add to list // device answered
→ return list of live addresses
i2c_read(handle, addr, len)
→ aa_i2c_read(addr, len bytes)
→ return bytes as Vec<u8>
i2c_write(handle, addr, data)
→ aa_i2c_write(addr, data)
spi_transfer(handle, bytes_to_send)
→ aa_spi_write(bytes) // full-duplex: sends + receives
→ return received bytes
gpio_set(handle, direction, value)
→ aa_gpio_direction(direction) // which pins are outputs
  → aa_gpio_set(value)                // set output levels
gpio_get(handle)
→ aa_gpio_get() // read all pin levels as bitmask
Drop(handle)
→ aa_close(handle._port) // always close on drop
```
**In stub mode** (no SDK): every method returns `Err(NotFound)` immediately. `find_devices()` returns `[]`. Nothing crashes.
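The shape of this layer can be sketched in Rust. This is a hedged, stub-mode-only illustration — the type and method names follow the walkthrough above, but the real `aardvark-sys` crate wraps the C library via FFI and differs in detail:

```rust
// Hypothetical sketch of the aardvark-sys surface in stub mode (no SDK linked).
// Every call fails cleanly instead of crashing, and find_devices() reports nothing.
#[derive(Debug, PartialEq)]
pub enum AardvarkError {
    NotFound,
    OpenFailed,
}

pub struct AardvarkHandle {
    _port: i32, // raw handle from aa_open; the real crate calls aa_close on Drop
}

impl AardvarkHandle {
    /// Stub mode: no C library present, so no adapters are ever reported.
    pub fn find_devices() -> Vec<i32> {
        Vec::new()
    }

    /// Stub mode: opening always fails with NotFound rather than panicking.
    pub fn open_port(_port: i32) -> Result<AardvarkHandle, AardvarkError> {
        Err(AardvarkError::NotFound)
    }
}

fn main() {
    assert!(AardvarkHandle::find_devices().is_empty());
    assert_eq!(AardvarkHandle::open_port(0).err(), Some(AardvarkError::NotFound));
    println!("stub mode: no hardware, no crash");
}
```

With the SDK linked, `find_devices()` would instead call `aa_find_devices` and return one port number per adapter.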
---
### Layer 2 — `AardvarkTransport` (the bridge)
**File:** `src/hardware/aardvark.rs`
The rest of ZeroClaw speaks a single language: `ZcCommand` in, `ZcResponse` out.
`AardvarkTransport` translates between that protocol and the aardvark-sys calls above.
**Algorithm:**
```
send(ZcCommand) → ZcResponse
extract command name from cmd.name
extract parameters from cmd.params (serde_json values)
match cmd.name:
"i2c_scan" → open handle → call i2c_scan()
→ format found addresses as hex list
→ return ZcResponse{ output: "0x48, 0x68" }
"i2c_read" → parse addr (hex string) + len (number)
→ open handle → i2c_enable(bitrate)
→ call i2c_read(addr, len)
→ format bytes as hex
→ return ZcResponse{ output: "0xAB 0xCD" }
"i2c_write" → parse addr + data bytes
→ open handle → i2c_write(addr, data)
→ return ZcResponse{ output: "ok" }
"spi_transfer" → parse bytes_hex string → decode to Vec<u8>
→ open handle → spi_enable(bitrate)
→ spi_transfer(bytes)
→ return received bytes as hex
"gpio_set" → parse direction + value bitmasks
→ open handle → gpio_set(dir, val)
→ return ZcResponse{ output: "ok" }
"gpio_get" → open handle → gpio_get()
→ return bitmask value as string
on any AardvarkError → return ZcResponse{ error: "..." }
```
**Key design choice — lazy open:** The handle is opened fresh for every command and dropped at the end. This means no held connection, no state to clean up, and no "is it still open?" logic anywhere.
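The dispatch above can be sketched as a plain `match` on the command name. A minimal sketch, not the real transport: the actual `ZcCommand` params carry `serde_json` values, and each arm really opens a handle and talks to hardware — here the hardware calls are replaced by comments and canned responses:

```rust
use std::collections::HashMap;

// Simplified stand-ins for ZeroClaw's protocol types (plain strings
// instead of serde_json values, to keep the sketch self-contained).
struct ZcCommand {
    name: String,
    params: HashMap<String, String>,
}

struct ZcResponse {
    output: String,
    error: Option<String>,
}

struct AardvarkTransport {
    port: i32,
    bitrate_khz: u32,
}

impl AardvarkTransport {
    // Lazy open: each command opens a fresh handle and drops it at the end,
    // so there is no held connection and no "is it still open?" state.
    fn send(&self, cmd: &ZcCommand) -> ZcResponse {
        match cmd.name.as_str() {
            "i2c_scan" => {
                // open handle → scan 0x08..=0x77 → format ACKing addresses as hex
                ZcResponse { output: "0x48, 0x68".into(), error: None }
            }
            "gpio_get" => {
                // open handle → read all pin levels as a bitmask
                ZcResponse { output: "0b100000".into(), error: None }
            }
            other => ZcResponse {
                output: String::new(),
                error: Some(format!("unknown command: {other}")),
            },
        }
    }
}

fn main() {
    let t = AardvarkTransport { port: 0, bitrate_khz: 100 };
    let cmd = ZcCommand { name: "i2c_scan".into(), params: HashMap::new() };
    let resp = t.send(&cmd);
    assert!(resp.error.is_none());
    println!("i2c_scan -> {}", resp.output);
}
```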
---
### Layer 3 — Tools (what the agent calls)
**File:** `src/hardware/aardvark_tools.rs`
Each tool is a thin wrapper. It:
1. Validates the agent's JSON input
2. Resolves which physical device to use
3. Builds a `ZcCommand`
4. Calls `AardvarkTransport.send()`
5. Returns the result as text
```
I2cScanTool.call(args)
→ look up "device" in args (default: "aardvark0")
→ find that device in the registry
→ build ZcCommand{ name: "i2c_scan", params: {} }
→ send to AardvarkTransport
→ return "Found: 0x48, 0x68" (or "No devices found")
I2cReadTool.call(args)
→ require args["addr"] and args["len"]
→ build ZcCommand{ name: "i2c_read", params: {addr, len} }
→ send → return hex bytes
I2cWriteTool.call(args)
→ require args["addr"] and args["data"] (hex or array)
→ build ZcCommand{ name: "i2c_write", params: {addr, data} }
→ send → return "ok" or error
SpiTransferTool.call(args)
→ require args["bytes"] (hex string)
→ build ZcCommand{ name: "spi_transfer", params: {bytes} }
→ send → return received bytes
GpioAardvarkTool.call(args)
→ require args["direction"] + args["value"] (set)
OR no extra args (get)
→ build appropriate ZcCommand
→ send → return result
DatasheetTool.call(args)
→ action = args["action"]: "search" | "download" | "list" | "read"
→ "search": return a Google/vendor search URL for the device
→ "download": fetch PDF from args["url"] → save to ~/.zeroclaw/hardware/datasheets/
→ "list": scan the datasheets directory → return filenames
→ "read": open a saved PDF and return its text
```
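The five steps above reduce to very little code per tool. A hedged sketch of an `I2cReadTool`-style wrapper, with the transport injected as a closure and `ZcCommand`/`ZcResponse` shapes assumed for illustration:

```rust
use std::collections::HashMap;

// Hypothetical shapes for ZcCommand/ZcResponse; the real types live in
// the transport layer described above.
struct ZcCommand {
    name: String,
    params: HashMap<String, String>,
}

struct ZcResponse {
    output: String,
}

// A thin wrapper in the style of I2cReadTool: validate args, build the
// command, delegate to the transport (injected here as a closure).
fn i2c_read_call(
    args: &HashMap<String, String>,
    send: impl Fn(ZcCommand) -> ZcResponse,
) -> Result<String, String> {
    let addr = args.get("addr").ok_or("missing required arg: addr")?;
    let len = args.get("len").ok_or("missing required arg: len")?;

    let mut params = HashMap::new();
    params.insert("addr".to_string(), addr.clone());
    params.insert("len".to_string(), len.clone());

    let resp = send(ZcCommand { name: "i2c_read".to_string(), params });
    Ok(resp.output)
}
```

All protocol knowledge stays in the transport; the tool only validates and forwards.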
---
### Layer 4 — Device Registry (the address book)
**File:** `src/hardware/device.rs`
The registry is a runtime map of every connected device.
Each entry stores: alias, kind, capabilities, transport handle.
```
register("aardvark", vid=0x2b76, ...)
→ DeviceKind::from_vid(0x2b76) → DeviceKind::Aardvark
→ DeviceRuntime::from_kind() → DeviceRuntime::Aardvark
→ assign alias "aardvark0" (then "aardvark1" for second, etc.)
→ store entry in HashMap
attach_transport("aardvark0", AardvarkTransport, capabilities{i2c,spi,gpio})
→ store Arc<dyn Transport> in the entry
has_aardvark()
→ any entry where kind == Aardvark → true / false
resolve_aardvark_device(args)
→ read "device" param (default: "aardvark0")
→ look up alias in HashMap
→ return (alias, DeviceContext{ transport, capabilities })
```
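The alias-assignment scheme above ("aardvark0", then "aardvark1", ...) is just a per-kind counter. A sketch with illustrative field names, not the real `src/hardware/device.rs` types:

```rust
use std::collections::HashMap;

// Alias assignment sketch: each device kind gets a monotonically
// numbered alias ("aardvark0", "aardvark1", ...). Field names are
// illustrative, not the real src/hardware/device.rs types.
#[derive(Default)]
struct Registry {
    counters: HashMap<String, usize>,
    kinds: HashMap<String, String>, // alias -> kind
}

impl Registry {
    fn register(&mut self, kind: &str) -> String {
        let n = self.counters.entry(kind.to_string()).or_insert(0);
        let alias = format!("{}{}", kind, n);
        *n += 1;
        self.kinds.insert(alias.clone(), kind.to_string());
        alias
    }

    fn has_kind(&self, kind: &str) -> bool {
        self.kinds.values().any(|k| k == kind)
    }
}
```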
---
### Layer 5 — `boot()` (startup wiring)
**File:** `src/hardware/mod.rs`
`boot()` runs once at startup. For Aardvark:
```
boot()
...
aardvark_ports = aardvark_sys::AardvarkHandle::find_devices()
// → [] in stub mode, [0] if one adapter is plugged in
for (i, port) in aardvark_ports:
alias = registry.register("aardvark", vid=0x2b76, ...)
// → "aardvark0", "aardvark1", ...
transport = AardvarkTransport::new(port, bitrate=100kHz)
registry.attach_transport(alias, transport, {i2c:true, spi:true, gpio:true})
log "[registry] aardvark0 ready → Total Phase port 0"
...
```
---
### Layer 6 — Tool Registry (the loader)
**File:** `src/hardware/tool_registry.rs`
After `boot()`, the tool registry checks what hardware is present and loads
only the relevant tools:
```
ToolRegistry::load(devices)
# always loaded (Pico / GPIO)
register: gpio_write, gpio_read, gpio_toggle, pico_flash, device_list, device_status
# only loaded if an Aardvark was found at boot
if devices.has_aardvark():
register: i2c_scan, i2c_read, i2c_write, spi_transfer, gpio_aardvark, datasheet
```
This is why the `hardware_feature_registers_all_six_tools` test still passes in stub mode — `has_aardvark()` returns false, 0 extra tools load, count stays at 6.
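The conditional load can be sketched as a single branch (tool names from the listing above; the function shape is illustrative):

```rust
// Conditional load sketch: the six Pico/GPIO tools always register;
// the six Aardvark tools register only when an adapter was found at boot.
fn load_tools(has_aardvark: bool) -> Vec<&'static str> {
    let mut tools = vec![
        "gpio_write", "gpio_read", "gpio_toggle",
        "pico_flash", "device_list", "device_status",
    ];
    if has_aardvark {
        tools.extend([
            "i2c_scan", "i2c_read", "i2c_write",
            "spi_transfer", "gpio_aardvark", "datasheet",
        ]);
    }
    tools
}
```

In stub mode the branch is never taken, which is exactly why the tool count stays at 6.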
---
## Full Flow Diagram
```
SDK FILES aardvark-sys ZeroClaw core
(vendor/) (crates/) (src/)
─────────────────────────────────────────────────────────────────
aardvark.h ──► build.rs boot()
aardvark.so (bindgen) ──► find_devices()
│ │
bindings.rs │ vec![0] (one adapter)
│ ▼
lib.rs register("aardvark0")
AardvarkHandle attach_transport(AardvarkTransport)
│ │
│ ▼
│ ToolRegistry::load()
│ has_aardvark() == true
│ → load 6 aardvark tools
─────────────────────────────────────────────────────────────────
USER MESSAGE: "scan the i2c bus"
agent loop
I2cScanTool.call()
resolve_aardvark_device("aardvark0")
│ returns transport Arc
AardvarkTransport.send(ZcCommand{ name: "i2c_scan" })
AardvarkHandle::open_port(0) ← opens USB connection
aa_i2c_read(0x08..0x77) ← probes each address
AardvarkHandle dropped ← USB connection closed
ZcResponse{ output: "Found: 0x48, 0x68" }
agent sends reply to user: "I found two I2C devices: 0x48 and 0x68"
```
---
## Stub vs Real Side by Side
| | Stub mode (now) | Real hardware |
|---|---|---|
| `find_devices()` | returns `[]` | returns `[0]` |
| `open_port(0)` | `Err(NotFound)` | opens USB, returns handle |
| `i2c_scan()` | `[]` | probes bus, returns addresses |
| tools loaded | only the 6 Pico tools | 6 Pico + 6 Aardvark tools |
| `has_aardvark()` | `false` | `true` |
| SDK needed | no | yes (`vendor/aardvark.h` + `.so`) |
The only code that changes when you plug in real hardware is inside
`crates/aardvark-sys/src/lib.rs` — every other layer is already wired up
and waiting.
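The stub side of that swap point looks roughly like this (a sketch — the error variant name and exact signatures are assumptions):

```rust
// Stub-mode sketch of the swap point in crates/aardvark-sys/src/lib.rs.
// Only these bodies change when the vendor SDK is present.
#[derive(Debug, PartialEq)]
enum AardvarkError {
    NotFound,
}

struct AardvarkHandle;

impl AardvarkHandle {
    fn find_devices() -> Vec<u16> {
        vec![] // stub: a real build asks the SDK for attached port numbers
    }

    fn open_port(_port: u16) -> Result<Self, AardvarkError> {
        Err(AardvarkError::NotFound) // stub: nothing to open
    }
}
```

Everything downstream (registry, tools, agent loop) behaves identically against either implementation.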
@@ -4,8 +4,22 @@ Localized documentation trees live here and under `docs/`.
## Locales
- العربية (Arabic): [ar/README.md](ar/README.md)
- বাংলা (Bengali): [bn/README.md](bn/README.md)
- Deutsch (German): [de/README.md](de/README.md)
- Ελληνικά (Greek): [el/README.md](el/README.md)
- Español (Spanish): [es/README.md](es/README.md)
- Français (French): [fr/README.md](fr/README.md)
- हिन्दी (Hindi): [hi/README.md](hi/README.md)
- Italiano (Italian): [it/README.md](it/README.md)
- 日本語 (Japanese): [ja/README.md](ja/README.md)
- 한국어 (Korean): [ko/README.md](ko/README.md)
- Português (Portuguese): [pt/README.md](pt/README.md)
- Русский (Russian): [ru/README.md](ru/README.md)
- Tagalog: [tl/README.md](tl/README.md)
- Tiếng Việt (Vietnamese): [vi/README.md](vi/README.md)
- Vietnamese (canonical): [`docs/vi/`](../vi/)
- Chinese (Simplified): [`docs/i18n/zh-CN/`](zh-CN/)
- 简体中文 (Chinese): [zh-CN/README.md](zh-CN/README.md)
## Structure
@@ -0,0 +1,35 @@
# ZeroClaw Documentation Hub (Arabic)
This locale hub is enabled for Arabic community support.
Last synchronized: **March 6, 2026**.
## Quick Links
- Arabic docs hub: [README.md](README.md)
- Arabic summary: [SUMMARY.md](SUMMARY.md)
- English docs hub: [../../README.md](../../README.md)
- English summary: [../../SUMMARY.md](../../SUMMARY.md)
## Coverage Status
Current status: **hub-level support enabled**. Full document translation is in progress.
## Other Languages
- English: [../../../README.md](../../../README.md)
- 简体中文: [../zh-CN/README.md](../zh-CN/README.md)
- 日本語: [../ja/README.md](../ja/README.md)
- 한국어: [../ko/README.md](../ko/README.md)
- Tiếng Việt: [../vi/README.md](../vi/README.md)
- Tagalog: [../tl/README.md](../tl/README.md)
- Español: [../es/README.md](../es/README.md)
- Português: [../pt/README.md](../pt/README.md)
- Italiano: [../it/README.md](../it/README.md)
- Deutsch: [../de/README.md](../de/README.md)
- Français: [../fr/README.md](../fr/README.md)
- العربية: [README.md](README.md)
- हिन्दी: [../hi/README.md](../hi/README.md)
- Русский: [../ru/README.md](../ru/README.md)
- বাংলা: [../bn/README.md](../bn/README.md)
- Ελληνικά: [../el/README.md](../el/README.md)
@@ -0,0 +1,20 @@
# ZeroClaw Docs Summary (Arabic)
This is the Arabic locale summary entry point.
Last synchronized: **March 6, 2026**.
## Entry Points
- Arabic docs hub: [README.md](README.md)
- English docs hub: [../../README.md](../../README.md)
- English unified summary: [../../SUMMARY.md](../../SUMMARY.md)
## Operator References (English Source)
- [../../commands-reference.md](../../commands-reference.md)
- [../../config-reference.md](../../config-reference.md)
- [../../providers-reference.md](../../providers-reference.md)
- [../../channels-reference.md](../../channels-reference.md)
- [../../operations-runbook.md](../../operations-runbook.md)
- [../../troubleshooting.md](../../troubleshooting.md)
@@ -0,0 +1,35 @@
# ZeroClaw Documentation Hub (Bengali)
This locale hub is enabled for Bengali community support.
Last synchronized: **March 6, 2026**.
## Quick Links
- Bengali docs hub: [README.md](README.md)
- Bengali summary: [SUMMARY.md](SUMMARY.md)
- English docs hub: [../../README.md](../../README.md)
- English summary: [../../SUMMARY.md](../../SUMMARY.md)
## Coverage Status
Current status: **hub-level support enabled**. Full document translation is in progress.
## Other Languages
- English: [../../../README.md](../../../README.md)
- 简体中文: [../zh-CN/README.md](../zh-CN/README.md)
- 日本語: [../ja/README.md](../ja/README.md)
- 한국어: [../ko/README.md](../ko/README.md)
- Tiếng Việt: [../vi/README.md](../vi/README.md)
- Tagalog: [../tl/README.md](../tl/README.md)
- Español: [../es/README.md](../es/README.md)
- Português: [../pt/README.md](../pt/README.md)
- Italiano: [../it/README.md](../it/README.md)
- Deutsch: [../de/README.md](../de/README.md)
- Français: [../fr/README.md](../fr/README.md)
- العربية: [../ar/README.md](../ar/README.md)
- हिन्दी: [../hi/README.md](../hi/README.md)
- Русский: [../ru/README.md](../ru/README.md)
- বাংলা: [README.md](README.md)
- Ελληνικά: [../el/README.md](../el/README.md)
@@ -0,0 +1,20 @@
# ZeroClaw Docs Summary (Bengali)
This is the Bengali locale summary entry point.
Last synchronized: **March 6, 2026**.
## Entry Points
- Bengali docs hub: [README.md](README.md)
- English docs hub: [../../README.md](../../README.md)
- English unified summary: [../../SUMMARY.md](../../SUMMARY.md)
## Operator References (English Source)
- [../../commands-reference.md](../../commands-reference.md)
- [../../config-reference.md](../../config-reference.md)
- [../../providers-reference.md](../../providers-reference.md)
- [../../channels-reference.md](../../channels-reference.md)
- [../../operations-runbook.md](../../operations-runbook.md)
- [../../troubleshooting.md](../../troubleshooting.md)
@@ -0,0 +1,35 @@
# ZeroClaw Documentation Hub (German)
This locale hub is enabled for German community support.
Last synchronized: **March 6, 2026**.
## Quick Links
- German docs hub: [README.md](README.md)
- German summary: [SUMMARY.md](SUMMARY.md)
- English docs hub: [../../README.md](../../README.md)
- English summary: [../../SUMMARY.md](../../SUMMARY.md)
## Coverage Status
Current status: **hub-level support enabled**. Full document translation is in progress.
## Other Languages
- English: [../../../README.md](../../../README.md)
- 简体中文: [../zh-CN/README.md](../zh-CN/README.md)
- 日本語: [../ja/README.md](../ja/README.md)
- 한국어: [../ko/README.md](../ko/README.md)
- Tiếng Việt: [../vi/README.md](../vi/README.md)
- Tagalog: [../tl/README.md](../tl/README.md)
- Español: [../es/README.md](../es/README.md)
- Português: [../pt/README.md](../pt/README.md)
- Italiano: [../it/README.md](../it/README.md)
- Deutsch: [README.md](README.md)
- Français: [../fr/README.md](../fr/README.md)
- العربية: [../ar/README.md](../ar/README.md)
- हिन्दी: [../hi/README.md](../hi/README.md)
- Русский: [../ru/README.md](../ru/README.md)
- বাংলা: [../bn/README.md](../bn/README.md)
- Ελληνικά: [../el/README.md](../el/README.md)
@@ -0,0 +1,20 @@
# ZeroClaw Docs Summary (German)
This is the German locale summary entry point.
Last synchronized: **March 6, 2026**.
## Entry Points
- German docs hub: [README.md](README.md)
- English docs hub: [../../README.md](../../README.md)
- English unified summary: [../../SUMMARY.md](../../SUMMARY.md)
## Operator References (English Source)
- [../../commands-reference.md](../../commands-reference.md)
- [../../config-reference.md](../../config-reference.md)
- [../../providers-reference.md](../../providers-reference.md)
- [../../channels-reference.md](../../channels-reference.md)
- [../../operations-runbook.md](../../operations-runbook.md)
- [../../troubleshooting.md](../../troubleshooting.md)
@@ -0,0 +1,35 @@
# ZeroClaw Documentation Hub (Hindi)
This locale hub is enabled for Hindi community support.
Last synchronized: **March 6, 2026**.
## Quick Links
- Hindi docs hub: [README.md](README.md)
- Hindi summary: [SUMMARY.md](SUMMARY.md)
- English docs hub: [../../README.md](../../README.md)
- English summary: [../../SUMMARY.md](../../SUMMARY.md)
## Coverage Status
Current status: **hub-level support enabled**. Full document translation is in progress.
## Other Languages
- English: [../../../README.md](../../../README.md)
- 简体中文: [../zh-CN/README.md](../zh-CN/README.md)
- 日本語: [../ja/README.md](../ja/README.md)
- 한국어: [../ko/README.md](../ko/README.md)
- Tiếng Việt: [../vi/README.md](../vi/README.md)
- Tagalog: [../tl/README.md](../tl/README.md)
- Español: [../es/README.md](../es/README.md)
- Português: [../pt/README.md](../pt/README.md)
- Italiano: [../it/README.md](../it/README.md)
- Deutsch: [../de/README.md](../de/README.md)
- Français: [../fr/README.md](../fr/README.md)
- العربية: [../ar/README.md](../ar/README.md)
- हिन्दी: [README.md](README.md)
- Русский: [../ru/README.md](../ru/README.md)
- বাংলা: [../bn/README.md](../bn/README.md)
- Ελληνικά: [../el/README.md](../el/README.md)
@@ -0,0 +1,20 @@
# ZeroClaw Docs Summary (Hindi)
This is the Hindi locale summary entry point.
Last synchronized: **March 6, 2026**.
## Entry Points
- Hindi docs hub: [README.md](README.md)
- English docs hub: [../../README.md](../../README.md)
- English unified summary: [../../SUMMARY.md](../../SUMMARY.md)
## Operator References (English Source)
- [../../commands-reference.md](../../commands-reference.md)
- [../../config-reference.md](../../config-reference.md)
- [../../providers-reference.md](../../providers-reference.md)
- [../../channels-reference.md](../../channels-reference.md)
- [../../operations-runbook.md](../../operations-runbook.md)
- [../../troubleshooting.md](../../troubleshooting.md)
@@ -0,0 +1,35 @@
# ZeroClaw Documentation Hub (Korean)
This locale hub is enabled for Korean community support.
Last synchronized: **March 6, 2026**.
## Quick Links
- Korean docs hub: [README.md](README.md)
- Korean summary: [SUMMARY.md](SUMMARY.md)
- English docs hub: [../../README.md](../../README.md)
- English summary: [../../SUMMARY.md](../../SUMMARY.md)
## Coverage Status
Current status: **hub-level support enabled**. Full document translation is in progress.
## Other Languages
- English: [../../../README.md](../../../README.md)
- 简体中文: [../zh-CN/README.md](../zh-CN/README.md)
- 日本語: [../ja/README.md](../ja/README.md)
- 한국어: [README.md](README.md)
- Tiếng Việt: [../vi/README.md](../vi/README.md)
- Tagalog: [../tl/README.md](../tl/README.md)
- Español: [../es/README.md](../es/README.md)
- Português: [../pt/README.md](../pt/README.md)
- Italiano: [../it/README.md](../it/README.md)
- Deutsch: [../de/README.md](../de/README.md)
- Français: [../fr/README.md](../fr/README.md)
- العربية: [../ar/README.md](../ar/README.md)
- हिन्दी: [../hi/README.md](../hi/README.md)
- Русский: [../ru/README.md](../ru/README.md)
- বাংলা: [../bn/README.md](../bn/README.md)
- Ελληνικά: [../el/README.md](../el/README.md)
@@ -0,0 +1,20 @@
# ZeroClaw Docs Summary (Korean)
This is the Korean locale summary entry point.
Last synchronized: **March 6, 2026**.
## Entry Points
- Korean docs hub: [README.md](README.md)
- English docs hub: [../../README.md](../../README.md)
- English unified summary: [../../SUMMARY.md](../../SUMMARY.md)
## Operator References (English Source)
- [../../commands-reference.md](../../commands-reference.md)
- [../../config-reference.md](../../config-reference.md)
- [../../providers-reference.md](../../providers-reference.md)
- [../../channels-reference.md](../../channels-reference.md)
- [../../operations-runbook.md](../../operations-runbook.md)
- [../../troubleshooting.md](../../troubleshooting.md)
@@ -0,0 +1,35 @@
# ZeroClaw Documentation Hub (Tagalog)
This locale hub is enabled for Tagalog community support.
Last synchronized: **March 6, 2026**.
## Quick Links
- Tagalog docs hub: [README.md](README.md)
- Tagalog summary: [SUMMARY.md](SUMMARY.md)
- English docs hub: [../../README.md](../../README.md)
- English summary: [../../SUMMARY.md](../../SUMMARY.md)
## Coverage Status
Current status: **hub-level support enabled**. Full document translation is in progress.
## Other Languages
- English: [../../../README.md](../../../README.md)
- 简体中文: [../zh-CN/README.md](../zh-CN/README.md)
- 日本語: [../ja/README.md](../ja/README.md)
- 한국어: [../ko/README.md](../ko/README.md)
- Tiếng Việt: [../vi/README.md](../vi/README.md)
- Tagalog: [README.md](README.md)
- Español: [../es/README.md](../es/README.md)
- Português: [../pt/README.md](../pt/README.md)
- Italiano: [../it/README.md](../it/README.md)
- Deutsch: [../de/README.md](../de/README.md)
- Français: [../fr/README.md](../fr/README.md)
- العربية: [../ar/README.md](../ar/README.md)
- हिन्दी: [../hi/README.md](../hi/README.md)
- Русский: [../ru/README.md](../ru/README.md)
- বাংলা: [../bn/README.md](../bn/README.md)
- Ελληνικά: [../el/README.md](../el/README.md)
@@ -0,0 +1,20 @@
# ZeroClaw Docs Summary (Tagalog)
This is the Tagalog locale summary entry point.
Last synchronized: **March 6, 2026**.
## Entry Points
- Tagalog docs hub: [README.md](README.md)
- English docs hub: [../../README.md](../../README.md)
- English unified summary: [../../SUMMARY.md](../../SUMMARY.md)
## Operator References (English Source)
- [../../commands-reference.md](../../commands-reference.md)
- [../../config-reference.md](../../config-reference.md)
- [../../providers-reference.md](../../providers-reference.md)
- [../../channels-reference.md](../../channels-reference.md)
- [../../operations-runbook.md](../../operations-runbook.md)
- [../../troubleshooting.md](../../troubleshooting.md)
@@ -314,6 +314,46 @@ temperature = 0.2
- 使用精确域或子域匹配(例如 `"api.example.com"`、`"example.com"`),或 `"*"` 允许任何公共域。
- 即使配置了 `"*"`,本地/私有目标仍然被阻止。
## `[google_workspace]`
| 键 | 默认值 | 用途 |
|---|---|---|
| `enabled` | `false` | 启用 `google_workspace` 工具 |
| `credentials_path` | 未设置 | Google 服务账号或 OAuth 凭据 JSON 的路径 |
| `default_account` | 未设置 | 以 `--account` 传递给 `gws` 的默认 Google 账号 |
| `allowed_services` | (内置列表) | 代理可访问的服务:`drive`、`gmail`、`calendar`、`sheets`、`docs`、`slides`、`tasks`、`people`、`chat`、`classroom`、`forms`、`keep`、`meet`、`events` |
| `rate_limit_per_minute` | `60` | 每分钟最大 `gws` 调用次数 |
| `timeout_secs` | `30` | 每次调用超时时间(秒) |
| `audit_log` | `false` | 为每次 `gws` 调用记录 `INFO` 日志 |
### `[[google_workspace.allowed_operations]]`
非空时,仅精确匹配的调用通过。当 `service`、`resource`、`sub_resource`、`method` 全部一致时,条目匹配。
为空时(默认),`allowed_services` 内的所有组合均可用。
| 键 | 是否必填 | 用途 |
|---|---|---|
| `service` | 是 | 服务标识符(须匹配 `allowed_services` 中的条目) |
| `resource` | 是 | 顶层资源名称(Gmail 为 `users`,Drive 为 `files`,Calendar 为 `events`) |
| `sub_resource` | 否 | 4 段 gws 命令的子资源。Gmail 操作使用 `gws gmail users <sub_resource> <method>`,因此 Gmail 条目需填写 `sub_resource` 才能在运行时匹配。Drive、Calendar 等使用 3 段命令,省略此字段。 |
| `methods` | 是 | 该资源/子资源上允许的一个或多个方法名称 |
Gmail 所有操作使用 `gws gmail users <sub_resource> <method>` 格式。未填写 `sub_resource` 的 Gmail 条目在运行时将永远无法匹配。Drive 和 Calendar 使用 3 段命令,省略 `sub_resource`。
```toml
[google_workspace]
enabled = true
default_account = "owner@company.com"
allowed_services = ["gmail"]
audit_log = true
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "drafts"
methods = ["list", "get", "create", "update"]
```
## `[gateway]`
| 键 | 默认值 | 用途 |
@@ -371,6 +411,30 @@ allowed_roots = [\"~/Desktop/projects\", \"/opt/shared-repo\"]
- 内存上下文注入忽略旧的 `assistant_resp*` 自动保存键,以防止旧模型生成的摘要被视为事实。
### `[memory.mem0]`
Mem0 (OpenMemory) 后端 — 连接自托管 mem0 服务器,提供基于向量的记忆存储和 LLM 事实提取。构建时需要 `memory-mem0` feature flag,配置需设置 `backend = "mem0"`。
| 键 | 默认值 | 环境变量 | 用途 |
|---|---|---|---|
| `url` | `http://localhost:8765` | `MEM0_URL` | OpenMemory 服务器地址 |
| `user_id` | `zeroclaw` | `MEM0_USER_ID` | 记忆作用域的用户 ID |
| `app_name` | `zeroclaw` | `MEM0_APP_NAME` | 在 mem0 中注册的应用名称 |
| `infer` | `true` | — | 使用 LLM 从存储文本中提取事实 (`true`) 或原样存储 (`false`) |
| `extraction_prompt` | 未设置 | `MEM0_EXTRACTION_PROMPT` | 自定义 LLM 事实提取提示词(如适用于非英文内容) |
```toml
[memory]
backend = "mem0"
[memory.mem0]
url = "http://192.168.0.171:8765"
user_id = "zeroclaw-bot"
extraction_prompt = "用原始语言提取事实..."
```
服务器部署脚本位于 `deploy/mem0/`
## `[[model_routes]]``[[embedding_routes]]`
使用路由提示,以便集成可以在模型 ID 演变时保持稳定的名称。
@@ -122,6 +122,34 @@ tools = ["mcp_browser_*"]
keywords = ["browse", "navigate", "open url", "screenshot"]
```
## `[pacing]`
Pacing controls for slow/local LLM workloads (Ollama, llama.cpp, vLLM). All keys are optional; when absent, existing behavior is preserved.
| Key | Default | Purpose |
|---|---|---|
| `step_timeout_secs` | _none_ | Per-step timeout: maximum seconds for a single LLM inference turn. Catches a truly hung model without terminating the overall task loop |
| `loop_detection_min_elapsed_secs` | _none_ | Minimum elapsed seconds before loop detection activates. Tasks completing under this threshold get aggressive loop protection; longer-running tasks receive a grace period |
| `loop_ignore_tools` | `[]` | Tool names excluded from identical-output loop detection. Useful for browser workflows where `browser_screenshot` structurally resembles a loop |
| `message_timeout_scale_max` | `4` | Override for the hardcoded timeout scaling cap. The channel message timeout budget is `message_timeout_secs * min(max_tool_iterations, message_timeout_scale_max)` |
Notes:
- These settings are intended for local/slow LLM deployments. Cloud-provider users typically do not need them.
- `step_timeout_secs` operates independently of the total channel message timeout budget. A step timeout abort does not consume the overall budget; the loop simply stops.
- `loop_detection_min_elapsed_secs` delays loop-detection counting, not the task itself. Loop protection remains fully active for short tasks (the default).
- `loop_ignore_tools` only suppresses tool-output-based loop detection for the listed tools. Other safety features (max iterations, overall timeout) remain active.
- `message_timeout_scale_max` must be >= 1. Setting it higher than `max_tool_iterations` has no additional effect (the formula uses `min()`).
- Example configuration for a slow local Ollama deployment:
```toml
[pacing]
step_timeout_secs = 120
loop_detection_min_elapsed_secs = 60
loop_ignore_tools = ["browser_screenshot", "browser_navigate"]
message_timeout_scale_max = 8
```
## `[security.otp]`
| Key | Default | Purpose |
@@ -185,12 +213,15 @@ Delegate sub-agent configurations. Each key under `[agents]` defines a named sub
| `max_iterations` | `10` | Max tool-call iterations for agentic mode |
| `timeout_secs` | `120` | Timeout in seconds for non-agentic provider calls (1-3600) |
| `agentic_timeout_secs` | `300` | Timeout in seconds for agentic sub-agent loops (1-3600) |
| `skills_directory` | unset | Optional skills directory path (workspace-relative) for scoped skill loading |
Notes:
- `agentic = false` preserves existing single prompt→response delegate behavior.
- `agentic = true` requires at least one matching entry in `allowed_tools`.
- The `delegate` tool is excluded from sub-agent allowlists to prevent re-entrant delegation loops.
- Sub-agents receive an enriched system prompt containing: tools section (allowed tools with parameters), skills section (from scoped or default directory), workspace path, current date/time, safety constraints, and shell policy when `shell` is in the effective tool list.
- When `skills_directory` is unset or empty, the sub-agent loads skills from the default workspace `skills/` directory. When set, skills are loaded exclusively from that directory (relative to workspace root), enabling per-agent scoped skill sets.
```toml
[agents.researcher]
@@ -208,6 +239,14 @@ provider = "ollama"
model = "qwen2.5-coder:32b"
temperature = 0.2
timeout_secs = 60
[agents.code_reviewer]
provider = "anthropic"
model = "claude-opus-4-5"
system_prompt = "You are an expert code reviewer focused on security and performance."
agentic = true
allowed_tools = ["file_read", "shell"]
skills_directory = "skills/code-review"
```
## `[runtime]`
@@ -349,6 +388,63 @@ Notes:
- Use exact domain or subdomain matching (e.g. `"api.example.com"`, `"example.com"`), or `"*"` to allow any public domain.
- Local/private targets are still blocked even when `"*"` is configured.
## `[google_workspace]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable the `google_workspace` tool |
| `credentials_path` | unset | Path to Google service account or OAuth credentials JSON |
| `default_account` | unset | Default Google account passed as `--account` to `gws` |
| `allowed_services` | (built-in list) | Services the agent may access: `drive`, `gmail`, `calendar`, `sheets`, `docs`, `slides`, `tasks`, `people`, `chat`, `classroom`, `forms`, `keep`, `meet`, `events` |
| `rate_limit_per_minute` | `60` | Maximum `gws` calls per minute |
| `timeout_secs` | `30` | Per-call execution timeout before kill |
| `audit_log` | `false` | Emit an `INFO` log line for every `gws` call |
### `[[google_workspace.allowed_operations]]`
When this array is non-empty, only exact matches pass. An entry matches a call when
`service`, `resource`, `sub_resource`, and `method` all agree. When the array is
empty (the default), all combinations within `allowed_services` are available.
| Key | Required | Purpose |
|---|---|---|
| `service` | yes | Service identifier (must match an entry in `allowed_services`) |
| `resource` | yes | Top-level resource name (`users` for Gmail, `files` for Drive, `events` for Calendar) |
| `sub_resource` | no | Sub-resource for 4-segment gws commands. Gmail operations use `gws gmail users <sub_resource> <method>`, so Gmail entries need `sub_resource` to match at runtime. Drive, Calendar, and most other services use 3-segment commands and omit it. |
| `methods` | yes | One or more method names allowed on that resource/sub_resource |
Gmail uses `gws gmail users <sub_resource> <method>` for all operations. A Gmail
entry without `sub_resource` will never match at runtime. Drive and Calendar use
3-segment commands and omit `sub_resource`.
```toml
[google_workspace]
enabled = true
default_account = "owner@company.com"
allowed_services = ["gmail"]
audit_log = true
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "messages"
methods = ["list", "get"]
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "drafts"
methods = ["list", "get", "create", "update"]
```
Notes:
- Requires `gws` to be installed and authenticated (`gws auth login`). Install: `npm install -g @googleworkspace/cli`.
- `credentials_path` sets `GOOGLE_APPLICATION_CREDENTIALS` before each call.
- `allowed_services` defaults to the built-in list if omitted or empty.
- Validation rejects duplicate `(service, resource)` pairs and duplicate methods within a single entry.
- See `docs/superpowers/specs/2026-03-19-google-workspace-operation-allowlist.md` for the full policy model and verified workflow examples.
## `[gateway]`
| Key | Default | Purpose |
@@ -357,6 +453,12 @@ Notes:
| `port` | `42617` | gateway listen port |
| `require_pairing` | `true` | require pairing before bearer auth |
| `allow_public_bind` | `false` | block accidental public exposure |
| `path_prefix` | _(none)_ | URL path prefix for reverse-proxy deployments (e.g. `"/zeroclaw"`) |
When deploying behind a reverse proxy that maps ZeroClaw to a sub-path,
set `path_prefix` to that sub-path (e.g. `"/zeroclaw"`). All gateway
routes will be served under this prefix. The value must start with `/`
and must not end with `/`.
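The two stated rules can be checked with a tiny validator. This is a hedged sketch of the constraints as documented; the real gateway validation may differ in details:

```rust
// Sketch of the stated path_prefix rules: must start with '/', must not
// end with '/'. The real gateway validation may differ.
fn validate_path_prefix(prefix: &str) -> Result<(), String> {
    if !prefix.starts_with('/') {
        return Err("path_prefix must start with '/'".to_string());
    }
    if prefix.ends_with('/') {
        return Err("path_prefix must not end with '/'".to_string());
    }
    Ok(())
}
```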
## `[autonomy]`
@@ -406,6 +508,30 @@ Notes:
- Memory context injection ignores legacy `assistant_resp*` auto-save keys to prevent old model-authored summaries from being treated as facts.
### `[memory.mem0]`
Mem0 (OpenMemory) backend — connects to a self-hosted mem0 server for vector-based memory with LLM-powered fact extraction. Requires feature flag `memory-mem0` at build time and `backend = "mem0"` in config.
| Key | Default | Env var | Purpose |
|---|---|---|---|
| `url` | `http://localhost:8765` | `MEM0_URL` | OpenMemory server URL |
| `user_id` | `zeroclaw` | `MEM0_USER_ID` | User ID for scoping memories |
| `app_name` | `zeroclaw` | `MEM0_APP_NAME` | Application name registered in mem0 |
| `infer` | `true` | — | Use LLM to extract facts from stored text (`true`) or store raw (`false`) |
| `extraction_prompt` | unset | `MEM0_EXTRACTION_PROMPT` | Custom prompt for LLM fact extraction (e.g. for non-English content) |
```toml
[memory]
backend = "mem0"
[memory.mem0]
url = "http://192.168.0.171:8765"
user_id = "zeroclaw-bot"
extraction_prompt = "Extract facts in the original language..."
```
Server deployment scripts are in `deploy/mem0/`.
## `[[model_routes]]` and `[[embedding_routes]]`
Use route hints so integrations can keep stable names while model IDs evolve.
@@ -505,7 +631,7 @@ Top-level channel options are configured under `channels_config`.
| Key | Default | Purpose |
|---|---|---|
| `message_timeout_secs` | `300` | Base timeout in seconds for channel message processing; runtime scales this with tool-loop depth (up to 4x) |
| `message_timeout_secs` | `300` | Base timeout in seconds for channel message processing; runtime scales this with tool-loop depth (up to 4x, overridable via `[pacing].message_timeout_scale_max`) |
Examples:
@@ -520,7 +646,7 @@ Examples:
Notes:
- Default `300s` is optimized for on-device LLMs (Ollama) which are slower than cloud APIs.
- Runtime timeout budget is `message_timeout_secs * scale`, where `scale = min(max_tool_iterations, 4)` and a minimum of `1`.
- Runtime timeout budget is `message_timeout_secs * scale`, where `scale = min(max_tool_iterations, cap)` and a minimum of `1`. The default cap is `4`; override with `[pacing].message_timeout_scale_max`.
- This scaling avoids false timeouts when the first LLM turn is slow/retried but later tool-loop turns still need to complete.
- If using cloud APIs (OpenAI, Anthropic, etc.), you can reduce this to `60` or lower.
- Values below `30` are clamped to `30` to avoid immediate timeout churn.
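Putting the notes above together, the effective budget is a sketchable one-liner (assumed to combine the 30s clamp, the `min()` scaling, and the floor of 1 exactly as documented):

```rust
// Budget sketch from the notes above: clamp the base to >= 30s, then
// scale by min(max_tool_iterations, cap) with a floor of 1.
// `cap` defaults to 4 ([pacing].message_timeout_scale_max).
fn message_timeout_budget(base_secs: u64, max_tool_iterations: u64, cap: u64) -> u64 {
    let base = base_secs.max(30);
    let scale = max_tool_iterations.min(cap).max(1);
    base * scale
}
```

For example, with the defaults (`300`, cap `4`) and `max_tool_iterations = 10`, the budget is 1200 seconds.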
@@ -2,7 +2,7 @@
This document maps provider IDs, aliases, and credential environment variables.
Last verified: **February 21, 2026**.
Last verified: **March 12, 2026**.
## How to List Providers
@@ -62,6 +62,7 @@ credential is not reused for fallback providers.
| `vllm` | — | Yes | `VLLM_API_KEY` (optional) |
| `osaurus` | — | Yes | `OSAURUS_API_KEY` (optional; defaults to `"osaurus"`) |
| `nvidia` | `nvidia-nim`, `build.nvidia.com` | No | `NVIDIA_API_KEY` |
| `avian` | — | No | `AVIAN_API_KEY` |
### Vercel AI Gateway Notes
@@ -0,0 +1,281 @@
# Google Workspace Operation Allowlist
Date: 2026-03-19
Status: Implemented
Scope: `google_workspace` wrapper only
## Problem
The current `google_workspace` tool scopes access only at the service level.
If `gmail` is allowed, the agent can request any Gmail resource and method that
`gws` and the credential authorize. That is too broad for supervised workflows
such as "read and draft, but never send."
This creates a gap between:
- tool-level safety expectations in first-party skills such as `email-assistant`
- actual runtime enforcement in the ZeroClaw wrapper
## Current State
The current wrapper supports:
- `allowed_services`
- `credentials_path`
- `default_account`
- rate limiting
- timeout
- audit logging
It does not currently support:
- declared credential profiles for `google_workspace`
- startup verification of granted OAuth scopes
- separate credential files per trust tier as a first-class config concept
## Goals
- Add a method-level allowlist to the ZeroClaw `google_workspace` wrapper.
- Preserve backward compatibility for existing configs.
- Fail closed when an operation is outside the configured allowlist.
- Make Gmail-native draft workflows possible without exposing send methods in the wrapper.
## Non-Goals
This slice does not attempt to solve credential-level policy gaps in Gmail OAuth.
Specifically, it does not add:
- OAuth scope introspection at startup
- credential profile declarations
- trust-tier routing across multiple credential files
- dynamic operation discovery
Those are valid follow-on items, but they are separate features.
## Proposed Config
Gmail uses a 4-segment gws command shape (`gws gmail users <sub_resource> <method>`),
so `sub_resource` is required for all Gmail entries. Drive and Calendar use
3-segment commands and omit `sub_resource`.
```toml
[google_workspace]
enabled = true
default_account = "owner@company.com"
allowed_services = ["gmail"]
audit_log = true
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "messages"
methods = ["list", "get"]
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "threads"
methods = ["get"]
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "drafts"
methods = ["list", "get", "create", "update"]
```
Semantics:
- If `allowed_operations` is empty, behavior stays backward compatible:
all resource/method combinations remain available within `allowed_services`.
- If `allowed_operations` is non-empty, only exact matches pass. An entry matches
a call when `service`, `resource`, `sub_resource`, and `method` all agree.
`sub_resource` in the entry is optional: an entry without `sub_resource` matches
only calls with no sub_resource; an entry with `sub_resource` matches only calls
with that exact sub_resource value.
- Service-level and operation-level checks both apply.
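The matching semantics above can be sketched as a predicate. This is a minimal illustration, not the shipped implementation; the struct and function names are hypothetical.

```rust
// Hypothetical sketch of the exact-match allowlist semantics described above.
pub struct AllowedOperation {
    pub service: String,
    pub resource: String,
    pub sub_resource: Option<String>,
    pub methods: Vec<String>,
}

/// An empty allowlist keeps the backward-compatible allow-all behavior
/// (within `allowed_services`); a non-empty one requires an exact match
/// on (service, resource, sub_resource, method).
pub fn operation_allowed(
    entries: &[AllowedOperation],
    service: &str,
    resource: &str,
    sub_resource: Option<&str>,
    method: &str,
) -> bool {
    if entries.is_empty() {
        return true; // backward-compatible: no operation-level restriction
    }
    entries.iter().any(|e| {
        e.service == service
            && e.resource == resource
            && e.sub_resource.as_deref() == sub_resource
            && e.methods.iter().any(|m| m == method)
    })
}
```

Note that an entry with `sub_resource = Some(...)` can never match a 3-segment call, and vice versa, which is exactly the asymmetry the semantics list spells out.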
## Operation Inventory Reference
The first question operators need answered is not "where is the canonical API
inventory?" It is "what string values are valid here?"
For `allowed_operations`, the runtime expects `service`, `resource`, an optional
`sub_resource`, and `methods`. The values come directly from the `gws` command
segments in the same order.
3-segment commands (Drive, Calendar, Sheets, etc.):
```text
gws <service> <resource> <method> ...
```
```toml
[[google_workspace.allowed_operations]]
service = "<service>"
resource = "<resource>"
# sub_resource omitted
methods = ["<method>"]
```
4-segment commands (Gmail and other user-scoped APIs):
```text
gws <service> <resource> <sub_resource> <method> ...
```
```toml
[[google_workspace.allowed_operations]]
service = "<service>"
resource = "<resource>"
sub_resource = "<sub_resource>"
methods = ["<method>"]
```
Examples verified against `gws` discovery output:
| CLI shape | Config entry |
|---|---|
| `gws gmail users messages list` | `service = "gmail"`, `resource = "users"`, `sub_resource = "messages"`, `method = "list"` |
| `gws gmail users drafts create` | `service = "gmail"`, `resource = "users"`, `sub_resource = "drafts"`, `method = "create"` |
| `gws calendar events list` | `service = "calendar"`, `resource = "events"`, `method = "list"` |
| `gws drive files get` | `service = "drive"`, `resource = "files"`, `method = "get"` |
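The segment-to-field mapping in the table above can be sketched as a tiny parser. Purely illustrative; the real runtime extracts these values from tool args rather than from a command string, and `parse_gws` is a hypothetical name.

```rust
/// Map a gws command string to allowlist fields.
/// 3-segment commands carry no sub_resource; 4-segment commands
/// (Gmail and other user-scoped APIs) include one.
fn parse_gws(cmd: &str) -> Option<(String, String, Option<String>, String)> {
    let seg: Vec<&str> = cmd.split_whitespace().collect();
    match seg.as_slice() {
        ["gws", service, resource, method] => Some((
            service.to_string(),
            resource.to_string(),
            None,
            method.to_string(),
        )),
        ["gws", service, resource, sub, method] => Some((
            service.to_string(),
            resource.to_string(),
            Some(sub.to_string()),
            method.to_string(),
        )),
        _ => None, // anything else is not a recognizable operation
    }
}
```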
Verified starter examples for common supervised workflows:
- Gmail read-only triage:
- `gmail/users/messages/list`
- `gmail/users/messages/get`
- `gmail/users/threads/list`
- `gmail/users/threads/get`
- Gmail draft-without-send:
- `gmail/users/drafts/list`
- `gmail/users/drafts/get`
- `gmail/users/drafts/create`
- `gmail/users/drafts/update`
- Calendar review:
- `calendar/events/list`
- `calendar/events/get`
- Calendar scheduling:
- `calendar/events/list`
- `calendar/events/get`
- `calendar/events/insert`
- `calendar/events/update`
- Drive lookup:
- `drive/files/list`
- `drive/files/get`
- Drive metadata and sharing review:
- `drive/files/list`
- `drive/files/get`
- `drive/files/update`
- `drive/permissions/list`
Important constraint:
- This spec intentionally documents the value shape and a small set of verified
common examples.
- It does not attempt to freeze a complete global list of every Google
Workspace operation, because the underlying `gws` command surface is derived
from Google's Discovery Service and can evolve over time.
When you need to confirm whether a less-common operation exists:
- Use the Google Workspace CLI docs as the operator-facing entry point:
`https://googleworkspace-cli.mintlify.app/`
- Use the Google API Discovery directory to identify the relevant API:
`https://developers.google.com/discovery/v1/reference/apis/list`
- Use the per-service Discovery document or REST reference to confirm the exact
resource and method names for that API.
## Runtime Enforcement
Validation order inside `google_workspace`:
1. Extract `service`, `resource`, `method` from args (required).
2. Extract and validate `sub_resource` if present (type check, character check).
3. Check rate limits.
4. Check `service` against `allowed_services`.
5. Check `(service, resource, sub_resource, method)` against `allowed_operations`
   when configured. Unmatched combinations are denied (fail-closed).
6. Validate `service`, `resource`, and `method` for shell-safe characters.
7. Build optional args (`params`, `body`, `format`, `page_all`, `page_limit`).
8. Charge action budget (only after all validation passes).
9. Execute the `gws` command.
This must be fail-closed. A missing operation match is a hard deny, not a warning.
## Data Model
Config type:
```rust
pub struct GoogleWorkspaceAllowedOperation {
pub service: String,
pub resource: String,
pub sub_resource: Option<String>,
pub methods: Vec<String>,
}
```
Added to `GoogleWorkspaceConfig`:
```rust
pub allowed_operations: Vec<GoogleWorkspaceAllowedOperation>
```
## Validation Rules
- `service` must be non-empty, lowercase alphanumeric with `_` or `-`
- `resource` must be non-empty, lowercase alphanumeric with `_` or `-`
- `sub_resource`, when present, must be non-empty, lowercase alphanumeric with `_` or `-`
- `methods` must be non-empty
- each method must be non-empty, lowercase alphanumeric with `_` or `-`
- duplicate methods within one entry are rejected by validation
- duplicate `(service, resource, sub_resource)` entries are rejected by validation
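The per-segment character rule above can be sketched as a single helper (hypothetical name):

```rust
/// Sketch of the segment rule: non-empty, lowercase ASCII alphanumeric,
/// with `_` or `-` allowed. Applies to service, resource, sub_resource,
/// and each method name.
fn valid_segment(s: &str) -> bool {
    !s.is_empty()
        && s.chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '_' || c == '-')
}
```

Rejecting uppercase, whitespace, and shell metacharacters here is what makes the later shell-safe validation step a formality rather than a load-bearing defense.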
## TDD Plan
1. Add config validation tests for invalid `allowed_operations`.
2. Add tool tests for allow-all fallback when `allowed_operations` is empty.
3. Add tool tests for exact allowlist matching.
4. Add tool tests that deny unlisted operations such as `gmail/users/drafts/send`.
5. Implement the config model and runtime checks.
6. Update docs with the new config shape and the Gmail draft-only pattern.
## Example Use Case
For `email-assistant`, the safe Gmail-native draft profile is:
```toml
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "messages"
methods = ["list", "get"]
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "threads"
methods = ["get"]
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "drafts"
methods = ["list", "get", "create", "update"]
```
Operations denied by omission: `gmail/users/messages/send`, `gmail/users/drafts/send`.
This is not a credential-level send prohibition. It is a runtime boundary inside
the ZeroClaw wrapper.
## Follow-On Work
Future credential-hardening work tracked separately:
1. Declared credential profiles in `google_workspace` config.
2. Startup verification of granted scopes against declared policy.
3. Multiple credential files per trust tier.
4. Optional profile-to-operation binding.
@@ -252,6 +252,45 @@ Note:
- Deny-all by default: if `allowed_domains` is empty, every HTTP request is denied.
- Use exact domain or subdomain matching (e.g. `"api.example.com"`, `"example.com"`).
## `[google_workspace]`
| Key | Default | Purpose |
|---|---|---|
| `enabled` | `false` | Enable the `google_workspace` tool |
| `credentials_path` | unset | Path to Google service account or OAuth credentials JSON |
| `default_account` | unset | Default Google account passed as `--account` to `gws` |
| `allowed_services` | (built-in list) | Services the agent may access: `drive`, `gmail`, `calendar`, `sheets`, `docs`, `slides`, `tasks`, `people`, `chat`, `classroom`, `forms`, `keep`, `meet`, `events` |
| `rate_limit_per_minute` | `60` | Maximum `gws` calls per minute |
| `timeout_secs` | `30` | Per-call execution timeout before kill |
| `audit_log` | `false` | Emit an `INFO` log line for every `gws` call |
### `[[google_workspace.allowed_operations]]`
When non-empty, only exact matches pass. An entry matches a call when `service`,
`resource`, `sub_resource`, and `method` all agree. When empty (the default), all
combinations within `allowed_services` are available.
| Key | Required | Purpose |
|---|---|---|
| `service` | yes | Service identifier (must match an entry in `allowed_services`) |
| `resource` | yes | Top-level resource name (`users` for Gmail, `files` for Drive, `events` for Calendar) |
| `sub_resource` | no | Sub-resource for 4-segment gws commands. Gmail operations use `gws gmail users <sub_resource> <method>`, so Gmail entries need `sub_resource` to match at runtime. Drive, Calendar, and most other services omit it. |
| `methods` | yes | One or more method names allowed on that resource/sub_resource |
```toml
[google_workspace]
enabled = true
default_account = "owner@company.com"
allowed_services = ["gmail"]
audit_log = true
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "drafts"
methods = ["list", "get", "create", "update"]
```
## `[gateway]`
| Key | Default | Purpose |
@@ -298,6 +337,30 @@ Note:
- Memory context injection skips legacy `assistant_resp*` auto-save keys so model-generated summaries are not treated as ground truth.
### `[memory.mem0]`
Mem0 (OpenMemory) backend — connects to a self-hosted mem0 server, providing vector memory with LLM-based fact extraction. Requires the `memory-mem0` feature flag at build time and `backend = "mem0"` in config.
| Key | Default | Environment variable | Purpose |
|---|---|---|---|
| `url` | `http://localhost:8765` | `MEM0_URL` | OpenMemory server URL |
| `user_id` | `zeroclaw` | `MEM0_USER_ID` | User ID used to partition memories |
| `app_name` | `zeroclaw` | `MEM0_APP_NAME` | Application name registered in mem0 |
| `infer` | `true` | — | Use an LLM to extract facts from text (`true`) or store text verbatim (`false`) |
| `extraction_prompt` | unset | `MEM0_EXTRACTION_PROMPT` | Custom prompt for LLM fact extraction (e.g. for non-English content) |
```toml
[memory]
backend = "mem0"
[memory.mem0]
url = "http://192.168.0.171:8765"
user_id = "zeroclaw-bot"
extraction_prompt = "Trích xuất sự kiện bằng ngôn ngữ gốc..."
```
Server deployment scripts live in `deploy/mem0/`.
## `[[model_routes]]` and `[[embedding_routes]]`
Route hints keep integration names stable when model IDs change.
@@ -54,6 +54,7 @@ With a fallback provider chain (`reliability.fallback_providers`), each pro
| `copilot` | `github-copilot` | No | (uses config/`API_KEY` fallback with a GitHub token) |
| `lmstudio` | `lm-studio` | Yes | (optional; defaults to local) |
| `nvidia` | `nvidia-nim`, `build.nvidia.com` | No | `NVIDIA_API_KEY` |
| `avian` | — | No | `AVIAN_API_KEY` |
### Notes on Gemini
@@ -0,0 +1,34 @@
## Aardvark Adapter (aardvark0)
- Protocol: I2C and SPI via Total Phase Aardvark USB
- Bitrate: 100 kHz (standard-mode I2C) by default
- Use `i2c_scan` first to discover connected devices
- Use `i2c_read` / `i2c_write` for register operations
- Use `spi_transfer` for full-duplex SPI
- Use `gpio_aardvark` to control the Aardvark's GPIO expansion pins
- Use `datasheet` tool when user identifies a new device
## Tool Selection — Aardvark
| Goal | Tool |
|--------------------------------|-----------------|
| Find devices on the I2C bus | `i2c_scan` |
| Read a register | `i2c_read` |
| Write a register | `i2c_write` |
| Full-duplex SPI transfer | `spi_transfer` |
| Control Aardvark GPIO pins | `gpio_aardvark` |
| User names a new device | `datasheet` |
## I2C Workflow
1. Run `i2c_scan` — find what addresses respond.
2. User identifies the device (or look up the address in the skill file).
3. Read the relevant register with `i2c_read`.
4. If datasheet is not yet cached, use `datasheet(action="search", device_name="...")`.
## Notes
- Aardvark has no firmware — it calls the C library directly.
Do NOT use `device_exec`, `device_read_code`, or `device_write_code` for Aardvark.
- The Aardvark adapter auto-enables I2C pull-ups (3.3 V) — no external resistors needed
for most sensors.
@@ -0,0 +1,41 @@
# aardvark0 — <Device Name> (<Part Number>)
<!-- Copy this file to ~/.zeroclaw/hardware/devices/aardvark0.md -->
<!-- Fill in the device details from the datasheet. -->
## Connection
- Adapter: Total Phase Aardvark (aardvark0)
- Protocol: I2C <!-- or SPI -->
- I2C Address: 0x48 <!-- change to the actual device address -->
- Bitrate: 100 kHz
## Key Registers (from datasheet)
<!-- Example for LM75 temperature sensor — replace with your device -->
| Register | Address | Description | Notes |
|----------|---------|----------------------------------------|------------------------|
| Temp     | 0x00    | Temperature (2 bytes, big-endian)      | 0.5 °C per LSB (9-bit) |
| Config | 0x01 | Configuration register | Read/write |
| Thyst | 0x02 | Hysteresis temperature | Read/write |
| Tos | 0x03 | Overtemperature shutdown threshold | Read/write |
## Datasheet
- File: `~/.zeroclaw/hardware/datasheets/<device>.pdf`
- Source: <!-- URL where you downloaded the datasheet -->
## Verified Working Commands
```python
# Read temperature from LM75 at I2C address 0x48, register 0x00
i2c_read(addr=0x48, register=0x00, len=2)
# Convert two bytes to °C (9-bit two's complement):
# raw = (byte[0] << 1) | (byte[1] >> 7)
# if raw >= 256: raw -= 512   # negative when byte[0] bit 7 is set
# temp = raw * 0.5
```
## Notes
<!-- Add any device-specific quirks, power-on sequences, or gotchas here -->
@@ -0,0 +1,63 @@
# Skill: I2C Operations via Aardvark
<!-- Copy to ~/.zeroclaw/hardware/skills/i2c.md -->
## Always scan first
If the I2C address is unknown, run `i2c_scan` before anything else.
## Common device addresses
| Address range | Typical devices |
|---------------|-----------------------------------------------|
| 0x08–0x0F     | Reserved / rare                               |
| 0x3C, 0x3D    | SSD1306 OLED display                          |
| 0x40–0x4F     | LM75, TMP102, HTU21D (temp/humidity)          |
| 0x42          | Common PSoC6 default                          |
| 0x48–0x4F     | LM75, DS1621, ADS1115 (ADC)                   |
| 0x50–0x57     | AT24Cxx EEPROM                                |
| 0x68–0x6F     | MPU6050 IMU, DS1307 / DS3231 RTC              |
| 0x76–0x77     | BME280, BMP280 (pressure + humidity)          |
## Reading a register
```text
i2c_read(addr=0x48, register=0x00, len=2)
```
## Writing a register
```text
i2c_write(addr=0x48, bytes=[0x01, 0x60])
```
## Write-then-read (register pointer pattern)
Some devices require you to first write the register address, then read separately:
```text
i2c_write(addr=0x48, bytes=[0x00])
i2c_read(addr=0x48, len=2)
```
The `i2c_read` tool handles this automatically when you specify `register=`.
## Temperature conversion — LM75 / TMP102
Raw bytes from register 0x00 are big-endian, 9-bit or 11-bit:
```python
raw = (byte[0] << 1) | (byte[1] >> 7) # for LM75 (9-bit)
if raw >= 256: raw -= 512 # handle negative (two's complement)
temp_c = raw * 0.5
```
## Decision table — Aardvark vs Pico tools
| Scenario | Use |
|------------------------------------------------|---------------|
| Talking to an I2C sensor via Aardvark | `i2c_read` |
| Configuring a sensor register | `i2c_write` |
| Discovering what's on the bus | `i2c_scan` |
| Running MicroPython on the connected Pico | `device_exec` |
| Blinking Pico LED | `device_exec` |
@@ -88,6 +88,7 @@ checksum = "8ec610d8f49840a5b376c69663b6369e71f4b34484b9b2eb29fb918d92516cb9"
dependencies = [
"bare-metal",
"bitfield",
"critical-section",
"embedded-hal 0.2.7",
"volatile-register",
]
@@ -837,6 +838,7 @@ dependencies = [
name = "nucleo"
version = "0.1.0"
dependencies = [
"cortex-m",
"cortex-m-rt",
"critical-section",
"defmt 1.0.1",
@@ -7,6 +7,8 @@
# Flash: probe-rs run --chip STM32F401RETx target/thumbv7em-none-eabihf/release/nucleo
# Or: zeroclaw peripheral flash-nucleo
[workspace]
[package]
name = "nucleo"
version = "0.1.0"
@@ -18,12 +20,13 @@ description = "ZeroClaw Nucleo-F401RE peripheral firmware — GPIO over JSON ser
embassy-executor = { version = "0.9", features = ["arch-cortex-m", "executor-thread", "defmt"] }
embassy-stm32 = { version = "0.5", features = ["defmt", "stm32f401re", "unstable-pac", "memory-x", "time-driver-tim4", "exti"] }
embassy-time = { version = "0.5", features = ["defmt", "defmt-timestamp-uptime", "tick-hz-32_768"] }
cortex-m = { version = "0.7", features = ["inline-asm", "critical-section-single-core"] }
cortex-m-rt = "0.7"
defmt = "1.0"
defmt-rtt = "1.0"
panic-probe = { version = "1.0", features = ["print-defmt"] }
heapless = { version = "0.9", default-features = false }
critical-section = "1.1"
cortex-m-rt = "0.7"
[package.metadata.embassy]
build = [
@@ -34,6 +37,5 @@ build = [
opt-level = "s"
lto = true
codegen-units = 1
strip = true
panic = "abort"
debug = 1
debug = 2
@@ -0,0 +1,2 @@
# ZeroClaw Pico firmware — serial protocol handler
# Placeholder: replace with actual MicroPython firmware for Pico deployment
Binary file not shown.
@@ -0,0 +1,3 @@
[target.thumbv7em-none-eabihf]
rustflags = ["-C", "link-arg=-Tlink.x", "-C", "link-arg=-Tdefmt.x"]
runner = "probe-rs run --chip STM32F401RETx"
@@ -568,6 +568,31 @@ then re-run bootstrap.
MSG
exit 0
fi
# Detect un-accepted Xcode/CLT license (causes `cc` to exit 69).
# xcrun --show-sdk-path can succeed even without an accepted license,
# so we test-compile a trivial C file which reliably triggers the error.
_xcode_test_file="$(mktemp /tmp/zeroclaw-xcode-check.XXXXXX.c)"
printf 'int main(){return 0;}\n' > "$_xcode_test_file"
if ! cc -x c "$_xcode_test_file" -o /dev/null 2>/dev/null; then
rm -f "$_xcode_test_file"
warn "Xcode/CLT license has not been accepted. Attempting to accept it now..."
_xcode_accept_ok=false
if [[ "$(id -u)" -eq 0 ]]; then
xcodebuild -license accept && _xcode_accept_ok=true
elif [[ -c /dev/tty ]] && have_cmd sudo; then
sudo xcodebuild -license accept < /dev/tty && _xcode_accept_ok=true
fi
if [[ "$_xcode_accept_ok" == true ]]; then
step_ok "Xcode license accepted"
else
error "Could not accept Xcode license. Run manually:"
error " sudo xcodebuild -license accept"
error "then re-run this installer."
exit 1
fi
else
rm -f "$_xcode_test_file"
fi
if ! have_cmd git; then
warn "git is not available. Install git (e.g., Homebrew) and re-run bootstrap."
fi
@@ -1168,6 +1193,43 @@ else
install_system_deps
fi
# Always check Xcode/CLT license on macOS, regardless of --install-system-deps.
# An un-accepted license causes `cc` to exit 69, breaking all Rust builds.
if [[ "$OS_NAME" == "Darwin" ]]; then
_xcode_test_file="$(mktemp /tmp/zeroclaw-xcode-check.XXXXXX.c)"
printf 'int main(){return 0;}\n' > "$_xcode_test_file"
if ! cc -x c "$_xcode_test_file" -o /dev/null 2>/dev/null; then
rm -f "$_xcode_test_file"
warn "Xcode/CLT license has not been accepted. Attempting to accept it now..."
# Use /dev/tty so sudo can prompt for a password even in a curl|bash pipe.
_xcode_accept_ok=false
if [[ "$(id -u)" -eq 0 ]]; then
xcodebuild -license accept && _xcode_accept_ok=true
elif [[ -c /dev/tty ]] && have_cmd sudo; then
sudo xcodebuild -license accept < /dev/tty && _xcode_accept_ok=true
fi
if [[ "$_xcode_accept_ok" == true ]]; then
step_ok "Xcode license accepted"
# Re-test compilation to confirm it's fixed.
_xcode_test_file="$(mktemp /tmp/zeroclaw-xcode-check.XXXXXX.c)"
printf 'int main(){return 0;}\n' > "$_xcode_test_file"
if ! cc -x c "$_xcode_test_file" -o /dev/null 2>/dev/null; then
rm -f "$_xcode_test_file"
error "C compiler still failing after license accept. Check your Xcode/CLT installation."
exit 1
fi
rm -f "$_xcode_test_file"
else
error "Could not accept Xcode license. Run manually:"
error " sudo xcodebuild -license accept"
error "then re-run this installer."
exit 1
fi
else
rm -f "$_xcode_test_file"
fi
fi
if [[ "$INSTALL_RUST" == true ]]; then
install_rust_toolchain
fi
@@ -1460,25 +1522,6 @@ if [[ -n "$ZEROCLAW_BIN" ]]; then
if "$ZEROCLAW_BIN" service restart 2>/dev/null; then
step_ok "Gateway service restarted"
# Fetch and display pairing code from running gateway
PAIR_CODE=""
for i in 1 2 3 4 5; do
sleep 2
if PAIR_CODE=$("$ZEROCLAW_BIN" gateway get-paircode 2>/dev/null | grep -oE '[0-9]{6}'); then
break
fi
done
if [[ -n "$PAIR_CODE" ]]; then
echo
echo -e " ${BOLD_BLUE}🔐 Gateway Pairing Code${RESET}"
echo
echo -e " ${BOLD_BLUE}┌──────────────┐${RESET}"
echo -e " ${BOLD_BLUE}│${RESET}    ${BOLD}${PAIR_CODE}${RESET}    ${BOLD_BLUE}│${RESET}"
echo -e " ${BOLD_BLUE}└──────────────┘${RESET}"
echo
echo -e " ${DIM}Enter this code in the dashboard to pair your device.${RESET}"
echo -e " ${DIM}Run 'zeroclaw gateway get-paircode --new' anytime to generate a fresh code.${RESET}"
fi
else
step_fail "Gateway service restart failed — re-run with zeroclaw service start"
fi
@@ -1525,7 +1568,6 @@ GATEWAY_PORT=42617
DASHBOARD_URL="http://127.0.0.1:${GATEWAY_PORT}"
echo
echo -e "${BOLD}Dashboard URL:${RESET} ${BLUE}${DASHBOARD_URL}${RESET}"
echo -e "${DIM} Run 'zeroclaw gateway get-paircode' to get your pairing code.${RESET}"
# --- Copy to clipboard ---
COPIED_TO_CLIPBOARD=false
@@ -0,0 +1,10 @@
# Allow the gpio group to control the Raspberry Pi onboard ACT LED
# via the Linux LED subsystem sysfs interface.
#
# Without this rule /sys/class/leds/ACT/{brightness,trigger} are
# root-only writable, which prevents zeroclaw from blinking the LED.
SUBSYSTEM=="leds", KERNEL=="ACT", ACTION=="add", \
RUN+="/bin/chgrp gpio /sys/%p/brightness", \
RUN+="/bin/chmod g+w /sys/%p/brightness", \
RUN+="/bin/chgrp gpio /sys/%p/trigger", \
RUN+="/bin/chmod g+w /sys/%p/trigger"
@@ -0,0 +1,232 @@
# scripts/ — Raspberry Pi Deployment Guide
This directory contains everything needed to cross-compile ZeroClaw and deploy it to a Raspberry Pi over SSH.
## Contents
| File | Purpose |
|------|---------|
| `deploy-rpi.sh` | One-shot cross-compile and deploy script |
| `rpi-config.toml` | Production config template deployed to `~/.zeroclaw/config.toml` |
| `zeroclaw.service` | systemd unit file installed on the Pi |
| `99-act-led.rules` | udev rule for ACT LED sysfs access without sudo |
---
## Prerequisites
### Cross-compilation toolchain (pick one)
#### Option A — cargo-zigbuild (recommended for Apple Silicon)
```bash
brew install zig
cargo install cargo-zigbuild
rustup target add aarch64-unknown-linux-gnu
```
#### Option B — cross (Docker-based)
```bash
cargo install cross
rustup target add aarch64-unknown-linux-gnu
# Docker must be running
```
The deploy script auto-detects which tool is available, preferring `cargo-zigbuild`.
Force a specific tool with `CROSS_TOOL=zigbuild` or `CROSS_TOOL=cross`.
### Optional: passwordless SSH
If you can't use SSH key authentication, install `sshpass` and set the `RPI_PASS` environment variable:
```bash
brew install sshpass # macOS
sudo apt install sshpass # Linux
```
---
## Quick Start
```bash
RPI_HOST=raspberrypi.local RPI_USER=pi ./scripts/deploy-rpi.sh
```
After the first deploy, you must set your API key on the Pi (see [First-Time Setup](#first-time-setup)).
---
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `RPI_HOST` | `raspberrypi.local` | Pi hostname or IP address |
| `RPI_USER` | `pi` | SSH username |
| `RPI_PORT` | `22` | SSH port |
| `RPI_DIR` | `~/zeroclaw` | Remote directory for the binary and `.env` |
| `RPI_PASS` | _(unset)_ | SSH password — uses `sshpass` if set; key auth used otherwise |
| `CROSS_TOOL` | _(auto-detect)_ | Force `zigbuild` or `cross` |
---
## What the Deploy Script Does
1. **Cross-compile** — builds a release binary for `aarch64-unknown-linux-gnu` with `--features hardware,peripheral-rpi`.
2. **Stop service** — runs `sudo systemctl stop zeroclaw` on the Pi (continues if not yet installed).
3. **Create remote directory** — ensures `$RPI_DIR` exists on the Pi.
4. **Copy binary** — SCPs the compiled binary to `$RPI_DIR/zeroclaw`.
5. **Create `.env`** — writes an `.env` skeleton with an `ANTHROPIC_API_KEY=` placeholder to `$RPI_DIR/.env` with mode `600`. Skipped if the file already exists so an existing key is not overwritten.
6. **Deploy config** — copies `rpi-config.toml` to `~/.zeroclaw/config.toml`, preserving any `api_key` already present in the file.
7. **Install systemd service** — copies `zeroclaw.service` to `/etc/systemd/system/`, then enables and restarts it.
8. **Hardware permissions** — adds the deploy user to the `gpio` group, copies `99-act-led.rules` to `/etc/udev/rules.d/`, and resets the ACT LED trigger.
---
## First-Time Setup
After the first successful deploy, SSH into the Pi and fill in your API key:
```bash
ssh pi@raspberrypi.local
nano ~/zeroclaw/.env
# Set: ANTHROPIC_API_KEY=sk-ant-...
sudo systemctl restart zeroclaw
```
The `.env` is loaded by the systemd service as an `EnvironmentFile`.
---
## Interacting with ZeroClaw on the Pi
Once the service is running the gateway listens on port **8080**.
### Health check
```bash
curl http://raspberrypi.local:8080/health
```
### Send a message
```bash
curl -s -X POST http://raspberrypi.local:8080/api/chat \
-H 'Content-Type: application/json' \
-d '{"message": "What is the CPU temperature?"}' | jq .
```
### Stream a conversation
```bash
curl -N -s -X POST http://raspberrypi.local:8080/api/chat \
-H 'Content-Type: application/json' \
-H 'Accept: text/event-stream' \
-d '{"message": "List connected hardware devices", "stream": true}'
```
### Follow service logs
```bash
ssh pi@raspberrypi.local 'journalctl -u zeroclaw -f'
```
---
## Hardware Features
### GPIO tools
ZeroClaw is deployed with the `peripheral-rpi` feature, which enables two LLM-callable tools:
- **`gpio_read`** — reads a GPIO pin value via sysfs (`/sys/class/gpio/...`).
- **`gpio_write`** — writes a GPIO pin value.
These tools let the agent directly control hardware in response to natural-language instructions.
### ACT LED
The udev rule `99-act-led.rules` grants the `gpio` group write access to:
```
/sys/class/leds/ACT/trigger
/sys/class/leds/ACT/brightness
```
This allows toggling the Pi's green ACT LED without `sudo`.
### Aardvark I2C/SPI adapter
If a Total Phase Aardvark adapter is connected, the `hardware` feature enables I2C/SPI communication with external devices. No extra setup is needed — the device is auto-detected via USB.
---
## Files Deployed to the Pi
| Remote path | Source | Description |
|------------|--------|-------------|
| `~/zeroclaw/zeroclaw` | compiled binary | Main agent binary |
| `~/zeroclaw/.env` | created on first deploy | API key and environment variables |
| `~/.zeroclaw/config.toml` | `rpi-config.toml` | Agent configuration |
| `/etc/systemd/system/zeroclaw.service` | `zeroclaw.service` | systemd service unit |
| `/etc/udev/rules.d/99-act-led.rules` | `99-act-led.rules` | ACT LED permissions |
---
## Configuration
`rpi-config.toml` is the production config template. Key defaults:
- **Provider**: `anthropic-custom:https://api.z.ai/api/anthropic`
- **Model**: `claude-3-5-sonnet-20241022`
- **Autonomy**: `full`
- **Allowed shell commands**: `git`, `cargo`, `npm`, `mkdir`, `touch`, `cp`, `mv`, `ls`, `cat`, `grep`, `find`, `echo`, `pwd`, `wc`, `head`, `tail`, `date`
To customise, edit `~/.zeroclaw/config.toml` directly on the Pi and restart the service.
---
## Troubleshooting
### Service won't start
```bash
ssh pi@raspberrypi.local 'sudo systemctl status zeroclaw'
ssh pi@raspberrypi.local 'journalctl -u zeroclaw -n 50 --no-pager'
```
### GPIO permission denied
Make sure the deploy user is in the `gpio` group and that a fresh login session has been started:
```bash
ssh pi@raspberrypi.local 'groups'
# Should include: gpio
```
If the group was just added, log out and back in, or run `newgrp gpio`.
### Wrong architecture / binary won't run
Re-run the deploy script. Confirm the target:
```bash
ssh pi@raspberrypi.local 'file ~/zeroclaw/zeroclaw'
# Expected: ELF 64-bit LSB pie executable, ARM aarch64
```
### Force a specific cross-compilation tool
```bash
CROSS_TOOL=zigbuild RPI_HOST=raspberrypi.local ./scripts/deploy-rpi.sh
# or
CROSS_TOOL=cross RPI_HOST=raspberrypi.local ./scripts/deploy-rpi.sh
```
### Rebuild locally without deploying
```bash
cargo zigbuild --release \
--target aarch64-unknown-linux-gnu \
--features hardware,peripheral-rpi
```
@@ -0,0 +1,223 @@
#!/usr/bin/env bash
# deploy-rpi.sh — cross-compile ZeroClaw for Raspberry Pi and deploy via SSH.
#
# Cross-compilation (pick ONE — the script auto-detects):
#
# Option A — cargo-zigbuild (recommended; works on Apple Silicon + Intel, no Docker)
# brew install zig
# cargo install cargo-zigbuild
# rustup target add aarch64-unknown-linux-gnu
#
# Option B — cross (Docker-based; requires Docker Desktop running)
# cargo install cross
#
# Usage:
# RPI_HOST=raspberrypi.local RPI_USER=pi ./scripts/deploy-rpi.sh
#
# Optional env vars:
# RPI_HOST — hostname or IP of the Pi (default: raspberrypi.local)
# RPI_USER — SSH user on the Pi (default: pi)
# RPI_PORT — SSH port (default: 22)
# RPI_DIR — remote deployment dir (default: /home/$RPI_USER/zeroclaw)
# RPI_PASS — SSH password (uses sshpass) (default: prompt interactively)
# CROSS_TOOL — force "zigbuild" or "cross" (default: auto-detect)
set -euo pipefail
RPI_HOST="${RPI_HOST:-raspberrypi.local}"
RPI_USER="${RPI_USER:-pi}"
RPI_PORT="${RPI_PORT:-22}"
RPI_DIR="${RPI_DIR:-/home/${RPI_USER}/zeroclaw}"
TARGET="aarch64-unknown-linux-gnu"
FEATURES="hardware,peripheral-rpi"
BINARY="target/${TARGET}/release/zeroclaw"
SSH_OPTS="-p ${RPI_PORT} -o StrictHostKeyChecking=no -o ConnectTimeout=10"
# scp uses -P (uppercase) for port; ssh uses -p (lowercase)
SCP_OPTS="-P ${RPI_PORT} -o StrictHostKeyChecking=no -o ConnectTimeout=10"
# If RPI_PASS is set, wrap ssh/scp with sshpass for non-interactive auth.
SSH_CMD="ssh"
SCP_CMD="scp"
if [[ -n "${RPI_PASS:-}" ]]; then
if ! command -v sshpass &>/dev/null; then
echo "ERROR: RPI_PASS is set but sshpass is not installed."
echo " brew install hudochenkov/sshpass/sshpass"
exit 1
fi
SSH_CMD="sshpass -p ${RPI_PASS} ssh"
SCP_CMD="sshpass -p ${RPI_PASS} scp"
fi
echo "==> Building ZeroClaw for Raspberry Pi (${TARGET})"
echo " Features: ${FEATURES}"
echo " Target host: ${RPI_USER}@${RPI_HOST}:${RPI_PORT}"
echo ""
# ── 1. Cross-compile — auto-detect best available tool ───────────────────────
# Prefer cargo-zigbuild: it works on Apple Silicon without Docker and avoids
# the rustup-toolchain-install errors that affect cross v0.2.x on arm64 Macs.
_detect_cross_tool() {
if [[ "${CROSS_TOOL:-}" == "cross" ]]; then
echo "cross"; return
fi
if [[ "${CROSS_TOOL:-}" == "zigbuild" ]]; then
echo "zigbuild"; return
fi
if command -v cargo-zigbuild &>/dev/null && command -v zig &>/dev/null; then
echo "zigbuild"; return
fi
if command -v cross &>/dev/null; then
echo "cross"; return
fi
echo "none"
}
TOOL=$(_detect_cross_tool)
case "${TOOL}" in
zigbuild)
echo "==> Using cargo-zigbuild (Zig cross-linker)"
# Ensure the target sysroot is registered with rustup.
rustup target add "${TARGET}" 2>/dev/null || true
cargo zigbuild \
--target "${TARGET}" \
--features "${FEATURES}" \
--release
;;
cross)
echo "==> Using cross (Docker-based)"
# Verify Docker is running before handing off — gives a clear error message
# instead of the confusing rustup-toolchain failure from cross v0.2.x.
if ! docker info &>/dev/null; then
echo ""
echo "ERROR: Docker is not running."
echo " Start Docker Desktop and retry, or install cargo-zigbuild instead:"
echo " brew install zig && cargo install cargo-zigbuild"
echo " rustup target add ${TARGET}"
exit 1
fi
cross build \
--target "${TARGET}" \
--features "${FEATURES}" \
--release
;;
none)
echo ""
echo "ERROR: No cross-compilation tool found."
echo ""
echo "Install one of the following and retry:"
echo ""
echo " Option A — cargo-zigbuild (recommended; works on Apple Silicon, no Docker):"
echo " brew install zig"
echo " cargo install cargo-zigbuild"
echo " rustup target add ${TARGET}"
echo ""
echo " Option B — cross (requires Docker Desktop running):"
echo " cargo install cross"
echo ""
exit 1
;;
esac
echo ""
echo "==> Build complete: ${BINARY}"
ls -lh "${BINARY}"
# ── 2. Stop running service (if any) so binary can be overwritten ─────────────
echo ""
echo "==> Stopping zeroclaw service (if running)"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
"sudo systemctl stop zeroclaw 2>/dev/null || true"
# ── 3. Create remote directory ────────────────────────────────────────────────
echo ""
echo "==> Creating remote directory ${RPI_DIR}"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" "mkdir -p ${RPI_DIR}"
# ── 4. Deploy binary ──────────────────────────────────────────────────────────
echo ""
echo "==> Deploying binary to ${RPI_USER}@${RPI_HOST}:${RPI_DIR}/zeroclaw"
${SCP_CMD} ${SCP_OPTS} "${BINARY}" "${RPI_USER}@${RPI_HOST}:${RPI_DIR}/zeroclaw"
# ── 5. Create .env skeleton (if it doesn't exist) ────────────────────────────
ENV_DEST="${RPI_DIR}/.env"
echo ""
echo "==> Checking for ${ENV_DEST}"
# shellcheck disable=SC2029
if ${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" "[ -f ${ENV_DEST} ]"; then
echo " .env already exists — skipping"
else
echo " Creating .env skeleton with 600 permissions"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
"mkdir -p ${RPI_DIR} && \
printf '# Set your API key here\nANTHROPIC_API_KEY=sk-ant-\n' > ${ENV_DEST} && \
chmod 600 ${ENV_DEST}"
echo " IMPORTANT: edit ${ENV_DEST} on the Pi and set ANTHROPIC_API_KEY"
fi
# ── 6. Deploy config ─────────────────────────────────────────────────────────
CONFIG_DEST="/home/${RPI_USER}/.zeroclaw/config.toml"
echo ""
echo "==> Deploying config to ${CONFIG_DEST}"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" "mkdir -p /home/${RPI_USER}/.zeroclaw"
# Preserve existing api_key from the remote config if present.
# shellcheck disable=SC2029
EXISTING_API_KEY=$(${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
"grep -m1 '^api_key' ${CONFIG_DEST} 2>/dev/null || true")
${SCP_CMD} ${SCP_OPTS} "scripts/rpi-config.toml" "${RPI_USER}@${RPI_HOST}:${CONFIG_DEST}"
if [[ -n "${EXISTING_API_KEY}" ]]; then
echo " Restoring existing api_key from previous config"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
"sed -i 's|^# api_key = .*|${EXISTING_API_KEY}|' ${CONFIG_DEST}"
fi
# ── 7. Deploy and enable systemd service ─────────────────────────────────────
SERVICE_DEST="/etc/systemd/system/zeroclaw.service"
echo ""
echo "==> Installing systemd service (requires sudo on the Pi)"
${SCP_CMD} ${SCP_OPTS} "scripts/zeroclaw.service" "${RPI_USER}@${RPI_HOST}:/tmp/zeroclaw.service"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
"sudo mv /tmp/zeroclaw.service ${SERVICE_DEST} && \
sudo systemctl daemon-reload && \
sudo systemctl enable zeroclaw && \
sudo systemctl restart zeroclaw && \
sudo systemctl status zeroclaw --no-pager || true"
# ── 8. Runtime permissions ───────────────────────────────────────────────────
echo ""
echo "==> Granting ${RPI_USER} access to GPIO group"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
"sudo usermod -aG gpio ${RPI_USER} || true"
# ── 9. Reset ACT LED trigger so ZeroClaw can control it ──────────────────────
echo ""
echo "==> Installing udev rule for ACT LED sysfs access by gpio group"
${SCP_CMD} ${SCP_OPTS} "scripts/99-act-led.rules" "${RPI_USER}@${RPI_HOST}:/tmp/99-act-led.rules"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
"sudo mv /tmp/99-act-led.rules /etc/udev/rules.d/99-act-led.rules && \
sudo udevadm control --reload-rules && \
sudo chgrp gpio /sys/class/leds/ACT/brightness /sys/class/leds/ACT/trigger 2>/dev/null || true && \
sudo chmod g+w /sys/class/leds/ACT/brightness /sys/class/leds/ACT/trigger 2>/dev/null || true"
echo ""
echo "==> Resetting ACT LED trigger (none)"
# shellcheck disable=SC2029
${SSH_CMD} ${SSH_OPTS} "${RPI_USER}@${RPI_HOST}" \
"echo none | sudo tee /sys/class/leds/ACT/trigger > /dev/null 2>&1 || true"
echo ""
echo "==> Deployment complete!"
echo ""
echo " ZeroClaw is running at http://${RPI_HOST}:8080"
echo " POST /api/chat — chat with the agent"
echo " GET /health — health check"
echo ""
echo " To check logs: ssh ${RPI_USER}@${RPI_HOST} 'journalctl -u zeroclaw -f'"
@@ -0,0 +1,631 @@
# ZeroClaw — Raspberry Pi production configuration
#
# Copy this to ~/.zeroclaw/config.toml on the Pi.
# deploy-rpi.sh does this automatically.
#
# API key is loaded from ~/.zeroclaw/.env (EnvironmentFile in systemd).
# Set it there as: ANTHROPIC_API_KEY=your-key-here
# Or set api_key directly below (not recommended for version control).
# api_key = ""
default_provider = "anthropic-custom:https://api.z.ai/api/anthropic"
default_model = "claude-3-5-sonnet-20241022"
default_temperature = 0.4
model_routes = []
embedding_routes = []
[model_providers]
[provider]
[observability]
backend = "none"
runtime_trace_mode = "none"
runtime_trace_path = "state/runtime-trace.jsonl"
runtime_trace_max_entries = 200
[autonomy]
level = "full"
workspace_only = false
allowed_commands = [
"git",
"npm",
"cargo",
"mkdir",
"touch",
"cp",
"mv",
"ls",
"cat",
"grep",
"find",
"echo",
"pwd",
"wc",
"head",
"tail",
"date",
]
command_context_rules = []
forbidden_paths = [
"/etc",
"/root",
"/home",
"/usr",
"/bin",
"/sbin",
"/lib",
"/opt",
"/boot",
"/dev",
"/proc",
"/sys",
"/var",
"/tmp",
"/mnt",
"~/.ssh",
"~/.gnupg",
"~/.aws",
"~/.config",
]
max_actions_per_hour = 100
max_cost_per_day_cents = 1000
require_approval_for_medium_risk = true
block_high_risk_commands = true
shell_env_passthrough = []
allow_sensitive_file_reads = false
allow_sensitive_file_writes = false
auto_approve = [
"file_read",
"memory_recall",
]
always_ask = []
allowed_roots = []
non_cli_excluded_tools = [
"shell",
"process",
"file_write",
"file_edit",
"git_operations",
"browser",
"browser_open",
"http_request",
"schedule",
"cron_add",
"cron_remove",
"cron_update",
"cron_run",
"memory_store",
"memory_forget",
"proxy_config",
"web_search_config",
"web_access_config",
"model_routing_config",
"channel_ack_config",
"pushover",
"composio",
"delegate",
"screenshot",
"image_info",
]
non_cli_approval_approvers = []
non_cli_natural_language_approval_mode = "direct"
[autonomy.non_cli_natural_language_approval_mode_by_channel]
[security]
roles = []
[security.sandbox]
backend = "auto"
firejail_args = []
[security.resources]
max_memory_mb = 512
max_cpu_time_seconds = 60
max_subprocesses = 10
memory_monitoring = true
[security.audit]
enabled = true
log_path = "audit.log"
max_size_mb = 100
sign_events = false
[security.otp]
enabled = true
method = "totp"
token_ttl_secs = 30
cache_valid_secs = 300
gated_actions = [
"shell",
"file_write",
"browser_open",
"browser",
"memory_forget",
]
gated_domains = []
gated_domain_categories = []
challenge_delivery = "dm"
challenge_timeout_secs = 120
challenge_max_attempts = 3
[security.estop]
enabled = false
state_file = "~/.zeroclaw/estop-state.json"
require_otp_to_resume = true
[security.syscall_anomaly]
enabled = true
strict_mode = false
alert_on_unknown_syscall = true
max_denied_events_per_minute = 5
max_total_events_per_minute = 120
max_alerts_per_minute = 30
alert_cooldown_secs = 20
log_path = "syscall-anomalies.log"
baseline_syscalls = [
"read",
"write",
"open",
"openat",
"close",
"stat",
"fstat",
"newfstatat",
"lseek",
"mmap",
"mprotect",
"munmap",
"brk",
"rt_sigaction",
"rt_sigprocmask",
"ioctl",
"fcntl",
"access",
"pipe2",
"dup",
"dup2",
"dup3",
"epoll_create1",
"epoll_ctl",
"epoll_wait",
"poll",
"ppoll",
"select",
"futex",
"clock_gettime",
"nanosleep",
"getpid",
"gettid",
"set_tid_address",
"set_robust_list",
"clone",
"clone3",
"fork",
"execve",
"wait4",
"exit",
"exit_group",
"socket",
"connect",
"accept",
"accept4",
"listen",
"sendto",
"recvfrom",
"sendmsg",
"recvmsg",
"getsockname",
"getpeername",
"setsockopt",
"getsockopt",
"getrandom",
"statx",
]
[security.perplexity_filter]
enable_perplexity_filter = false
perplexity_threshold = 18.0
suffix_window_chars = 64
min_prompt_chars = 32
symbol_ratio_threshold = 0.2
[security.outbound_leak_guard]
enabled = true
action = "redact"
sensitivity = 0.7
[security.url_access]
block_private_ip = true
allow_cidrs = []
allow_domains = []
allow_loopback = false
require_first_visit_approval = false
enforce_domain_allowlist = false
domain_allowlist = []
domain_blocklist = []
approved_domains = []
[runtime]
kind = "native"
[runtime.docker]
image = "alpine:3.20"
network = "none"
memory_limit_mb = 512
cpu_limit = 1.0
read_only_rootfs = true
mount_workspace = true
allowed_workspace_roots = []
[runtime.wasm]
tools_dir = "tools/wasm"
fuel_limit = 1000000
memory_limit_mb = 64
max_module_size_mb = 50
allow_workspace_read = false
allow_workspace_write = false
allowed_hosts = []
[runtime.wasm.security]
require_workspace_relative_tools_dir = true
reject_symlink_modules = true
reject_symlink_tools_dir = true
strict_host_validation = true
capability_escalation_mode = "deny"
module_hash_policy = "warn"
[runtime.wasm.security.module_sha256]
[research]
enabled = false
trigger = "never"
keywords = [
"find",
"search",
"check",
"investigate",
"look",
"research",
"найди",
"проверь",
"исследуй",
"поищи",
]
min_message_length = 50
max_iterations = 5
show_progress = true
system_prompt_prefix = ""
[reliability]
provider_retries = 2
provider_backoff_ms = 500
fallback_providers = []
api_keys = []
channel_initial_backoff_secs = 2
channel_max_backoff_secs = 60
scheduler_poll_secs = 15
scheduler_retries = 2
[reliability.model_fallbacks]
[scheduler]
enabled = true
max_tasks = 64
max_concurrent = 4
[agent]
compact_context = true
max_tool_iterations = 20
max_history_messages = 50
parallel_tools = false
tool_dispatcher = "auto"
loop_detection_no_progress_threshold = 3
loop_detection_ping_pong_cycles = 2
loop_detection_failure_streak = 3
safety_heartbeat_interval = 5
safety_heartbeat_turn_interval = 10
[agent.session]
backend = "none"
strategy = "per-sender"
ttl_seconds = 3600
max_messages = 50
[agent.teams]
enabled = true
auto_activate = true
max_agents = 32
strategy = "adaptive"
load_window_secs = 120
inflight_penalty = 8
recent_selection_penalty = 2
recent_failure_penalty = 12
[agent.subagents]
enabled = true
auto_activate = true
max_concurrent = 10
strategy = "adaptive"
load_window_secs = 180
inflight_penalty = 10
recent_selection_penalty = 3
recent_failure_penalty = 16
queue_wait_ms = 15000
queue_poll_ms = 200
[skills]
open_skills_enabled = false
trusted_skill_roots = []
allow_scripts = false
prompt_injection_mode = "full"
[query_classification]
enabled = false
rules = []
[heartbeat]
enabled = false
interval_minutes = 30
[cron]
enabled = true
max_run_history = 50
[goal_loop]
enabled = false
interval_minutes = 10
step_timeout_secs = 120
max_steps_per_cycle = 3
[channels_config]
cli = true
message_timeout_secs = 300
[channels_config.webhook]
port = 8080
secret = "mytoken123"
[channels_config.ack_reaction]
[memory]
backend = "sqlite"
auto_save = true
hygiene_enabled = true
archive_after_days = 7
purge_after_days = 30
conversation_retention_days = 30
embedding_provider = "none"
embedding_model = "text-embedding-3-small"
embedding_dimensions = 1536
vector_weight = 0.7
keyword_weight = 0.3
min_relevance_score = 0.4
embedding_cache_size = 10000
chunk_max_tokens = 512
response_cache_enabled = false
response_cache_ttl_minutes = 60
response_cache_max_entries = 5000
snapshot_enabled = false
snapshot_on_hygiene = false
auto_hydrate = true
sqlite_journal_mode = "wal"
[memory.qdrant]
collection = "zeroclaw_memories"
[storage.provider.config]
provider = ""
schema = "public"
table = "memories"
tls = false
[tunnel]
provider = "none"
[gateway]
port = 8080
host = "0.0.0.0"
require_pairing = false
trusted_ips = ["0.0.0.0/0"]
allow_public_bind = true
paired_tokens = []
pair_rate_limit_per_minute = 10
webhook_rate_limit_per_minute = 60
trust_forwarded_headers = false
rate_limit_max_keys = 10000
idempotency_ttl_secs = 300
idempotency_max_keys = 10000
webhook_secret = "mytoken123"
[gateway.node_control]
enabled = false
allowed_node_ids = []
[composio]
enabled = false
entity_id = "default"
[secrets]
encrypt = true
[browser]
enabled = false
allowed_domains = []
browser_open = "default"
backend = "agent_browser"
auto_backend_priority = []
agent_browser_command = "agent-browser"
agent_browser_extra_args = []
agent_browser_timeout_ms = 30000
native_headless = true
native_webdriver_url = "http://127.0.0.1:9515"
[browser.computer_use]
endpoint = "http://127.0.0.1:8787/v1/actions"
timeout_ms = 15000
allow_remote_endpoint = false
window_allowlist = []
[http_request]
enabled = false
allowed_domains = []
max_response_size = 1000000
timeout_secs = 30
user_agent = "ZeroClaw/1.0"
[http_request.credential_profiles]
[multimodal]
max_images = 4
max_image_size_mb = 5
allow_remote_fetch = false
[web_fetch]
enabled = false
provider = "fast_html2md"
allowed_domains = ["*"]
blocked_domains = []
max_response_size = 500000
timeout_secs = 30
user_agent = "ZeroClaw/1.0"
[web_search]
enabled = false
provider = "duckduckgo"
fallback_providers = []
retries_per_provider = 0
retry_backoff_ms = 250
domain_filter = []
language_filter = []
exa_search_type = "auto"
exa_include_text = false
jina_site_filters = []
max_results = 5
timeout_secs = 15
user_agent = "ZeroClaw/1.0"
[proxy]
enabled = false
no_proxy = []
scope = "zeroclaw"
services = []
[identity]
format = "openclaw"
extra_files = []
[cost]
enabled = false
daily_limit_usd = 10.0
monthly_limit_usd = 100.0
warn_at_percent = 80
allow_override = false
[cost.prices."anthropic/claude-opus-4-20250514"]
input = 15.0
output = 75.0
[cost.prices."openai/gpt-4o"]
input = 5.0
output = 15.0
[cost.prices."openai/gpt-4o-mini"]
input = 0.15
output = 0.6
[cost.prices."anthropic/claude-sonnet-4-20250514"]
input = 3.0
output = 15.0
[cost.prices."openai/o1-preview"]
input = 15.0
output = 60.0
[cost.prices."anthropic/claude-3-haiku"]
input = 0.25
output = 1.25
[cost.prices."google/gemini-2.0-flash"]
input = 0.1
output = 0.4
[cost.prices."anthropic/claude-3.5-sonnet"]
input = 3.0
output = 15.0
[cost.prices."google/gemini-1.5-pro"]
input = 1.25
output = 5.0
[cost.enforcement]
mode = "warn"
route_down_model = "hint:fast"
reserve_percent = 10
[economic]
enabled = false
initial_balance = 1000.0
min_evaluation_threshold = 0.6
[economic.token_pricing]
input_price_per_million = 3.0
output_price_per_million = 15.0
[peripherals]
enabled = true
boards = []
[agents]
[coordination]
enabled = true
lead_agent = "delegate-lead"
max_inbox_messages_per_agent = 256
max_dead_letters = 256
max_context_entries = 512
max_seen_message_ids = 4096
[hooks]
enabled = true
[hooks.builtin]
boot_script = false
command_logger = false
session_memory = false
[plugins]
enabled = true
allow = []
deny = []
load_paths = []
[plugins.entries]
[hardware]
enabled = true
transport = "None"
baud_rate = 115200
workspace_datasheets = false
[transcription]
enabled = false
api_url = "https://api.groq.com/openai/v1/audio/transcriptions"
model = "whisper-large-v3-turbo"
max_duration_secs = 120
[agents_ipc]
enabled = false
db_path = "~/.zeroclaw/agents.db"
staleness_secs = 300
[mcp]
enabled = false
servers = []
[wasm]
enabled = true
memory_limit_mb = 64
fuel_limit = 1000000000
registry_url = "https://zeromarket.vercel.app/api"
@@ -0,0 +1,22 @@
[Unit]
Description=ZeroClaw AI Hardware Agent
Documentation=https://github.com/zeroclaw/zeroclaw
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=pi
SupplementaryGroups=gpio spi i2c
WorkingDirectory=/home/pi/zeroclaw
ExecStart=/home/pi/zeroclaw/zeroclaw gateway --host 0.0.0.0 --port 8080
Restart=on-failure
RestartSec=5
EnvironmentFile=/home/pi/zeroclaw/.env
Environment=RUST_LOG=info
# Expand ~ in config path
Environment=HOME=/home/pi
[Install]
WantedBy=multi-user.target
@@ -45,6 +45,8 @@ pub struct Agent {
/// Pre-rendered security policy summary injected into the system prompt
/// so the LLM knows the concrete constraints before making tool calls.
security_summary: Option<String>,
/// Autonomy level from config; controls safety prompt instructions.
autonomy_level: crate::security::AutonomyLevel,
}
pub struct AgentBuilder {
@@ -71,6 +73,7 @@ pub struct AgentBuilder {
response_cache: Option<Arc<crate::memory::response_cache::ResponseCache>>,
tool_descriptions: Option<ToolDescriptions>,
security_summary: Option<String>,
autonomy_level: Option<crate::security::AutonomyLevel>,
}
impl AgentBuilder {
@@ -99,6 +102,7 @@ impl AgentBuilder {
response_cache: None,
tool_descriptions: None,
security_summary: None,
autonomy_level: None,
}
}
@@ -226,6 +230,11 @@ impl AgentBuilder {
self
}
pub fn autonomy_level(mut self, level: crate::security::AutonomyLevel) -> Self {
self.autonomy_level = Some(level);
self
}
pub fn build(self) -> Result<Agent> {
let mut tools = self
.tools
@@ -278,6 +287,9 @@ impl AgentBuilder {
response_cache: self.response_cache,
tool_descriptions: self.tool_descriptions,
security_summary: self.security_summary,
autonomy_level: self
.autonomy_level
.unwrap_or(crate::security::AutonomyLevel::Supervised),
})
}
}
@@ -318,7 +330,7 @@ impl Agent {
}
}
pub fn from_config(config: &Config) -> Result<Self> {
pub async fn from_config(config: &Config) -> Result<Self> {
let observer: Arc<dyn Observer> =
Arc::from(observability::create_observer(&config.observability));
let runtime: Arc<dyn runtime::RuntimeAdapter> =
@@ -347,7 +359,7 @@ impl Agent {
None
};
let (tools, _delegate_handle) = tools::all_tools_with_runtime(
let (mut tools, delegate_handle, _reaction_handle) = tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
@@ -361,8 +373,69 @@ impl Agent {
&config.agents,
config.api_key.as_deref(),
config,
None,
);
// ── Wire MCP tools (non-fatal) ─────────────────────────────
// Replicates the same MCP initialization logic used in the CLI
// and webhook paths (loop_.rs) so that the WebSocket/daemon UI
// path also has access to MCP tools.
if config.mcp.enabled && !config.mcp.servers.is_empty() {
tracing::info!(
"Initializing MCP client — {} server(s) configured",
config.mcp.servers.len()
);
match tools::McpRegistry::connect_all(&config.mcp.servers).await {
Ok(registry) => {
let registry = std::sync::Arc::new(registry);
if config.mcp.deferred_loading {
let deferred_set = tools::DeferredMcpToolSet::from_registry(
std::sync::Arc::clone(&registry),
)
.await;
tracing::info!(
"MCP deferred: {} tool stub(s) from {} server(s)",
deferred_set.len(),
registry.server_count()
);
let activated = std::sync::Arc::new(std::sync::Mutex::new(
tools::ActivatedToolSet::new(),
));
tools.push(Box::new(tools::ToolSearchTool::new(
deferred_set,
activated,
)));
} else {
let names = registry.tool_names();
let mut registered = 0usize;
for name in names {
if let Some(def) = registry.get_tool_def(&name).await {
let wrapper: std::sync::Arc<dyn tools::Tool> =
std::sync::Arc::new(tools::McpToolWrapper::new(
name,
def,
std::sync::Arc::clone(&registry),
));
if let Some(ref handle) = delegate_handle {
handle.write().push(std::sync::Arc::clone(&wrapper));
}
tools.push(Box::new(tools::ArcToolRef(wrapper)));
registered += 1;
}
}
tracing::info!(
"MCP: {} tool(s) registered from {} server(s)",
registered,
registry.server_count()
);
}
}
Err(e) => {
tracing::error!("MCP registry failed to initialize: {e:#}");
}
}
}
let provider_name = config.default_provider.as_deref().unwrap_or("openrouter");
let model_name = config
@@ -438,6 +511,7 @@ impl Agent {
.skills_prompt_mode(config.skills.prompt_injection_mode)
.auto_save(config.memory.auto_save)
.security_summary(Some(security.prompt_summary()))
.autonomy_level(config.autonomy.level)
.build()
}
@@ -480,6 +554,7 @@ impl Agent {
dispatcher_instructions: &instructions,
tool_descriptions: self.tool_descriptions.as_ref(),
security_summary: self.security_summary.clone(),
autonomy_level: self.autonomy_level,
};
self.prompt_builder.build(&ctx)
}
@@ -772,7 +847,7 @@ pub async fn run(
}
effective_config.default_temperature = temperature;
let mut agent = Agent::from_config(&effective_config)?;
let mut agent = Agent::from_config(&effective_config).await?;
let provider_name = effective_config
.default_provider
@@ -1116,7 +1191,9 @@ mod tests {
.extra_headers
.insert("X-Title".to_string(), "zeroclaw-web".to_string());
let mut agent = Agent::from_config(&config).expect("agent from config");
let mut agent = Agent::from_config(&config)
.await
.expect("agent from config");
let response = agent.turn("hello").await.expect("agent turn");
assert_eq!(response, "hello from mock");
@@ -1,5 +1,8 @@
use crate::approval::{ApprovalManager, ApprovalRequest, ApprovalResponse};
use crate::config::schema::ModelPricing;
use crate::config::Config;
use crate::cost::types::{BudgetCheck, TokenUsage as CostTokenUsage};
use crate::cost::CostTracker;
use crate::i18n::ToolDescriptions;
use crate::memory::{self, Memory, MemoryCategory};
use crate::multimodal;
@@ -23,6 +26,108 @@ use std::time::{Duration, Instant};
use tokio_util::sync::CancellationToken;
use uuid::Uuid;
// ── Cost tracking via task-local ──
/// Context for cost tracking within the tool call loop.
/// Scoped via `tokio::task_local!` at call sites (channels, gateway).
#[derive(Clone)]
pub(crate) struct ToolLoopCostTrackingContext {
pub tracker: Arc<CostTracker>,
pub prices: Arc<std::collections::HashMap<String, ModelPricing>>,
}
impl ToolLoopCostTrackingContext {
pub(crate) fn new(
tracker: Arc<CostTracker>,
prices: Arc<std::collections::HashMap<String, ModelPricing>>,
) -> Self {
Self { tracker, prices }
}
}
tokio::task_local! {
pub(crate) static TOOL_LOOP_COST_TRACKING_CONTEXT: Option<ToolLoopCostTrackingContext>;
}
/// 3-tier model pricing lookup:
/// 1. Direct model name
/// 2. Qualified `provider/model`
/// 3. Suffix after last `/`
fn lookup_model_pricing<'a>(
prices: &'a std::collections::HashMap<String, ModelPricing>,
provider_name: &str,
model: &str,
) -> Option<&'a ModelPricing> {
prices
.get(model)
.or_else(|| prices.get(&format!("{provider_name}/{model}")))
.or_else(|| {
model
.rsplit_once('/')
.and_then(|(_, suffix)| prices.get(suffix))
})
}
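The 3-tier fallback above can be exercised standalone. This is a minimal sketch that swaps the crate's `ModelPricing` struct for plain `(input, output)` price tuples (a hypothetical simplification for illustration; `lookup` mirrors `lookup_model_pricing` but is not the crate function):

```rust
use std::collections::HashMap;

// Sketch of the 3-tier pricing lookup: direct name, then
// provider-qualified name, then the suffix after the last '/'.
fn lookup<'a>(
    prices: &'a HashMap<String, (f64, f64)>,
    provider: &str,
    model: &str,
) -> Option<&'a (f64, f64)> {
    prices
        .get(model) // tier 1: direct model name
        .or_else(|| prices.get(&format!("{}/{}", provider, model))) // tier 2: qualified
        .or_else(|| {
            model
                .rsplit_once('/')
                .and_then(|(_, suffix)| prices.get(suffix)) // tier 3: bare suffix
        })
}

fn main() {
    let mut prices = HashMap::new();
    prices.insert("openai/gpt-4o".to_string(), (5.0, 15.0));
    prices.insert("claude-3-haiku".to_string(), (0.25, 1.25));

    // Tier 2: the qualified key matches where the bare name misses.
    assert_eq!(lookup(&prices, "openai", "gpt-4o"), Some(&(5.0, 15.0)));
    // Tier 3: suffix match when the model arrives provider-prefixed.
    assert_eq!(
        lookup(&prices, "anthropic", "anthropic/claude-3-haiku"),
        Some(&(0.25, 1.25))
    );
    // No entry at any tier.
    assert_eq!(lookup(&prices, "x", "unknown"), None);
}
```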
/// Record token usage from an LLM response via the task-local cost tracker.
/// Returns `(total_tokens, cost_usd)` on success, `None` when not scoped or no usage.
fn record_tool_loop_cost_usage(
provider_name: &str,
model: &str,
usage: &crate::providers::traits::TokenUsage,
) -> Option<(u64, f64)> {
let input_tokens = usage.input_tokens.unwrap_or(0);
let output_tokens = usage.output_tokens.unwrap_or(0);
let total_tokens = input_tokens.saturating_add(output_tokens);
if total_tokens == 0 {
return None;
}
let ctx = TOOL_LOOP_COST_TRACKING_CONTEXT
.try_with(Clone::clone)
.ok()
.flatten()?;
let pricing = lookup_model_pricing(&ctx.prices, provider_name, model);
let cost_usage = CostTokenUsage::new(
model,
input_tokens,
output_tokens,
pricing.map_or(0.0, |entry| entry.input),
pricing.map_or(0.0, |entry| entry.output),
);
if pricing.is_none() {
tracing::debug!(
provider = provider_name,
model,
"Cost tracking recorded token usage with zero pricing (no pricing entry found)"
);
}
if let Err(error) = ctx.tracker.record_usage(cost_usage.clone()) {
tracing::warn!(
provider = provider_name,
model,
"Failed to record cost tracking usage: {error}"
);
}
Some((cost_usage.total_tokens, cost_usage.cost_usd))
}
/// Check budget before an LLM call. Returns `None` when no cost tracking
/// context is scoped (tests, delegate, CLI without cost config).
pub(crate) fn check_tool_loop_budget() -> Option<BudgetCheck> {
TOOL_LOOP_COST_TRACKING_CONTEXT
.try_with(Clone::clone)
.ok()
.flatten()
.map(|ctx| {
ctx.tracker
.check_budget(0.0)
.unwrap_or(BudgetCheck::Allowed)
})
}
/// Minimum characters per chunk when relaying LLM text to a streaming draft.
const STREAM_CHUNK_MIN_CHARS: usize = 80;
@@ -256,6 +361,10 @@ pub(crate) const PROGRESS_MIN_INTERVAL_MS: u64 = 500;
/// Used before streaming the final answer so progress lines are replaced by the clean response.
pub(crate) const DRAFT_CLEAR_SENTINEL: &str = "\x00CLEAR\x00";
tokio::task_local! {
pub(crate) static TOOL_CHOICE_OVERRIDE: Option<String>;
}
/// Extract a short hint from tool call arguments for progress display.
fn truncate_tool_args_for_progress(name: &str, args: &serde_json::Value, max_len: usize) -> String {
let hint = match name {
@@ -461,7 +570,7 @@ async fn build_context(
let mut context = String::new();
// Pull relevant memories for this message
if let Ok(entries) = mem.recall(user_msg, 5, session_id).await {
if let Ok(entries) = mem.recall(user_msg, 5, session_id, None, None).await {
let relevant: Vec<_> = entries
.iter()
.filter(|e| match e.score {
@@ -2222,6 +2331,7 @@ pub(crate) async fn agent_turn(
dedup_exempt_tools,
activated_tools,
model_switch_callback,
&crate::config::PacingConfig::default(),
)
.await
}
@@ -2425,6 +2535,14 @@ fn should_execute_tools_in_parallel(
return false;
}
// tool_search activates deferred MCP tools into ActivatedToolSet.
// Running tool_search in parallel with the tools it activates causes a
// race condition where the tool lookup happens before activation completes.
// Force sequential execution whenever tool_search is in the batch.
if tool_calls.iter().any(|call| call.name == "tool_search") {
return false;
}
if let Some(mgr) = approval {
if tool_calls.iter().any(|call| mgr.needs_approval(&call.name)) {
// Approval-gated calls must keep sequential handling so the caller can
@@ -2523,6 +2641,7 @@ pub(crate) async fn run_tool_call_loop(
dedup_exempt_tools: &[String],
activated_tools: Option<&std::sync::Arc<std::sync::Mutex<crate::tools::ActivatedToolSet>>>,
model_switch_callback: Option<ModelSwitchCallback>,
pacing: &crate::config::PacingConfig,
) -> Result<String> {
let max_iterations = if max_tool_iterations == 0 {
DEFAULT_MAX_TOOL_ITERATIONS
@@ -2531,6 +2650,14 @@ pub(crate) async fn run_tool_call_loop(
};
let turn_id = Uuid::new_v4().to_string();
let loop_started_at = Instant::now();
let loop_ignore_tools: HashSet<&str> = pacing
.loop_ignore_tools
.iter()
.map(String::as_str)
.collect();
let mut consecutive_identical_outputs: usize = 0;
let mut last_tool_output_hash: Option<u64> = None;
for iteration in 0..max_iterations {
let mut seen_tool_signatures: HashSet<(String, String)> = HashSet::new();
@@ -2630,6 +2757,19 @@ pub(crate) async fn run_tool_call_loop(
hooks.fire_llm_input(history, model).await;
}
// Budget enforcement — block if limit exceeded (no-op when not scoped)
if let Some(BudgetCheck::Exceeded {
current_usd,
limit_usd,
period,
}) = check_tool_loop_budget()
{
return Err(anyhow::anyhow!(
"Budget exceeded: ${:.4} of ${:.2} {:?} limit. Cannot make further API calls until the budget resets.",
current_usd, limit_usd, period
));
}
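The enforcement branch above can be sketched in isolation. This assumes a simplified, hypothetical `BudgetCheck` (the real enum in `crate::cost::types` also carries a `period`):

```rust
// Simplified stand-in for the crate's BudgetCheck (hypothetical; the
// real enum also includes the budget period in the Exceeded variant).
#[derive(Debug, PartialEq)]
enum BudgetCheck {
    Allowed,
    Exceeded { current_usd: f64, limit_usd: f64 },
}

// Mirrors the pre-call gate: bail with a descriptive error when the
// budget is exceeded, otherwise let the LLM call proceed.
fn enforce(check: BudgetCheck) -> Result<(), String> {
    if let BudgetCheck::Exceeded { current_usd, limit_usd } = check {
        return Err(format!(
            "Budget exceeded: ${:.4} of ${:.2} limit",
            current_usd, limit_usd
        ));
    }
    Ok(())
}

fn main() {
    assert!(enforce(BudgetCheck::Allowed).is_ok());
    let err = enforce(BudgetCheck::Exceeded { current_usd: 10.1234, limit_usd: 10.0 });
    assert_eq!(err.unwrap_err(), "Budget exceeded: $10.1234 of $10.00 limit");
}
```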
// Unified path via Provider::chat so provider-specific native tool logic
// (OpenAI/Anthropic/OpenRouter/compatible adapters) is honored.
let request_tools = if use_native_tools {
@@ -2647,13 +2787,43 @@ pub(crate) async fn run_tool_call_loop(
temperature,
);
let chat_result = if let Some(token) = cancellation_token.as_ref() {
tokio::select! {
() = token.cancelled() => return Err(ToolLoopCancelled.into()),
result = chat_future => result,
// Wrap the LLM call with an optional per-step timeout from pacing config.
// This catches a truly hung model response without terminating the overall
// task loop (the per-message budget handles that separately).
let chat_result = match pacing.step_timeout_secs {
Some(step_secs) if step_secs > 0 => {
let step_timeout = Duration::from_secs(step_secs);
if let Some(token) = cancellation_token.as_ref() {
tokio::select! {
() = token.cancelled() => return Err(ToolLoopCancelled.into()),
result = tokio::time::timeout(step_timeout, chat_future) => {
match result {
Ok(inner) => inner,
Err(_) => anyhow::bail!(
"LLM inference step timed out after {step_secs}s (step_timeout_secs)"
),
}
},
}
} else {
match tokio::time::timeout(step_timeout, chat_future).await {
Ok(inner) => inner,
Err(_) => anyhow::bail!(
"LLM inference step timed out after {step_secs}s (step_timeout_secs)"
),
}
}
}
_ => {
if let Some(token) = cancellation_token.as_ref() {
tokio::select! {
() = token.cancelled() => return Err(ToolLoopCancelled.into()),
result = chat_future => result,
}
} else {
chat_future.await
}
}
} else {
chat_future.await
};
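The per-step timeout idea has the same shape outside an async runtime. The real code wraps the future with `tokio::time::timeout`; this std-only sketch uses a channel `recv_timeout` instead, purely to illustrate the bounded-wait pattern (not the actual implementation):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `work` on a worker thread and wait at most `timeout` for its
// result, mirroring the step_timeout_secs bail path in the tool loop.
fn call_with_timeout<T: Send + 'static>(
    work: impl FnOnce() -> T + Send + 'static,
    timeout: Duration,
) -> Result<T, String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(work());
    });
    rx.recv_timeout(timeout)
        .map_err(|_| format!("step timed out after {:?}", timeout))
}

fn main() {
    // Fast work completes within the budget.
    assert_eq!(call_with_timeout(|| 42, Duration::from_millis(200)), Ok(42));
    // Slow work trips the timeout instead of hanging the loop.
    let slow = call_with_timeout(
        || {
            thread::sleep(Duration::from_millis(300));
            1
        },
        Duration::from_millis(50),
    );
    assert!(slow.is_err());
}
```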
let (response_text, parsed_text, tool_calls, assistant_history_content, native_tool_calls) =
@@ -2675,6 +2845,12 @@ pub(crate) async fn run_tool_call_loop(
output_tokens: resp_output_tokens,
});
// Record cost via task-local tracker (no-op when not scoped)
let _ = resp
.usage
.as_ref()
.and_then(|usage| record_tool_loop_cost_usage(provider_name, model, usage));
let response_text = resp.text_or_empty().to_string();
// First try native structured tool calls (OpenAI-format).
// Fall back to text-based parsing (XML tags, markdown blocks,
@@ -3146,7 +3322,13 @@ pub(crate) async fn run_tool_call_loop(
ordered_results[*idx] = Some((call.name.clone(), call.tool_call_id.clone(), outcome));
}
// Collect tool results and build per-tool output for loop detection.
// Only non-ignored tool outputs contribute to the identical-output hash.
let mut detection_relevant_output = String::new();
for (tool_name, tool_call_id, outcome) in ordered_results.into_iter().flatten() {
if !loop_ignore_tools.contains(tool_name.as_str()) {
detection_relevant_output.push_str(&outcome.output);
}
individual_results.push((tool_call_id, outcome.output.clone()));
let _ = writeln!(
tool_results,
@@ -3155,6 +3337,53 @@ pub(crate) async fn run_tool_call_loop(
);
}
// ── Time-gated loop detection ──────────────────────────
// When pacing.loop_detection_min_elapsed_secs is set, identical-output
// loop detection activates after the task has been running that long.
// This avoids false-positive aborts on long-running browser/research
// workflows while keeping aggressive protection for quick tasks.
// When not configured, identical-output detection is disabled (preserving
// existing behavior where only max_iterations prevents runaway loops).
let loop_detection_active = match pacing.loop_detection_min_elapsed_secs {
Some(min_secs) => loop_started_at.elapsed() >= Duration::from_secs(min_secs),
None => false, // disabled when not configured (backwards compatible)
};
if loop_detection_active && !detection_relevant_output.is_empty() {
use std::hash::{Hash, Hasher};
let mut hasher = std::collections::hash_map::DefaultHasher::new();
detection_relevant_output.hash(&mut hasher);
let current_hash = hasher.finish();
if last_tool_output_hash == Some(current_hash) {
consecutive_identical_outputs += 1;
} else {
consecutive_identical_outputs = 0;
last_tool_output_hash = Some(current_hash);
}
// Bail if we see 3+ consecutive identical tool outputs (clear runaway).
if consecutive_identical_outputs >= 3 {
runtime_trace::record_event(
"tool_loop_identical_output_abort",
Some(channel_name),
Some(provider_name),
Some(model),
Some(&turn_id),
Some(false),
Some("identical tool output detected 3 consecutive times"),
serde_json::json!({
"iteration": iteration + 1,
"consecutive_identical": consecutive_identical_outputs,
}),
);
anyhow::bail!(
"Agent loop aborted: identical tool output detected {} consecutive times",
consecutive_identical_outputs
);
}
}
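The identical-output detection above can be sketched as a small standalone state machine (`LoopDetector` is a hypothetical name for illustration; the diff keeps the hash and counter as loop locals):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash each round's combined tool output and count consecutive
// repeats; the caller aborts once the streak reaches 3.
struct LoopDetector {
    last_hash: Option<u64>,
    consecutive: usize,
}

impl LoopDetector {
    fn new() -> Self {
        Self { last_hash: None, consecutive: 0 }
    }

    // Returns the current streak of consecutive identical outputs.
    fn observe(&mut self, output: &str) -> usize {
        let mut hasher = DefaultHasher::new();
        output.hash(&mut hasher);
        let current = hasher.finish();
        if self.last_hash == Some(current) {
            self.consecutive += 1;
        } else {
            self.consecutive = 0;
            self.last_hash = Some(current);
        }
        self.consecutive
    }
}

fn main() {
    let mut d = LoopDetector::new();
    assert_eq!(d.observe("same output"), 0);
    assert_eq!(d.observe("same output"), 1);
    assert_eq!(d.observe("same output"), 2);
    assert_eq!(d.observe("different"), 0); // a new output resets the streak
    assert_eq!(d.observe("different"), 1);
}
```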
// Add assistant message with tool calls + tool results to history.
// Native mode: use JSON-structured messages so convert_messages() can
// reconstruct proper OpenAI-format tool_calls and tool result messages.
@@ -3296,7 +3525,7 @@ pub async fn run(
} else {
(None, None)
};
let (mut tools_registry, delegate_handle) = tools::all_tools_with_runtime(
let (mut tools_registry, delegate_handle, _reaction_handle) = tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
@@ -3310,6 +3539,7 @@ pub async fn run(
&config.agents,
config.api_key.as_deref(),
&config,
None,
);
let peripheral_tools: Vec<Box<dyn Tool>> =
@@ -3604,6 +3834,8 @@ pub async fn run(
Some(&config.autonomy),
native_tools,
config.skills.prompt_injection_mode,
config.agent.compact_context,
config.agent.max_system_prompt_chars,
);
// Append structured tool-use instructions with schemas (only for non-native providers)
@@ -3704,6 +3936,7 @@ pub async fn run(
&config.agent.tool_call_dedup_exempt,
activated_handle.as_ref(),
Some(model_switch_callback.clone()),
&config.pacing,
)
.await
{
@@ -3931,6 +4164,7 @@ pub async fn run(
&config.agent.tool_call_dedup_exempt,
activated_handle.as_ref(),
Some(model_switch_callback.clone()),
&config.pacing,
)
.await
{
@@ -4051,21 +4285,23 @@ pub async fn process_message(
} else {
(None, None)
};
let (mut tools_registry, delegate_handle_pm) = tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
mem.clone(),
composio_key,
composio_entity_id,
&config.browser,
&config.http_request,
&config.web_fetch,
&config.workspace_dir,
&config.agents,
config.api_key.as_deref(),
&config,
);
let (mut tools_registry, delegate_handle_pm, _reaction_handle_pm) =
tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
mem.clone(),
composio_key,
composio_entity_id,
&config.browser,
&config.http_request,
&config.web_fetch,
&config.workspace_dir,
&config.agents,
config.api_key.as_deref(),
&config,
None,
);
let peripheral_tools: Vec<Box<dyn Tool>> =
crate::peripherals::create_peripheral_tools(&config.peripherals).await?;
tools_registry.extend(peripheral_tools);
@@ -4261,6 +4497,8 @@ pub async fn process_message(
Some(&config.autonomy),
native_tools,
config.skills.prompt_injection_mode,
config.agent.compact_context,
config.agent.max_system_prompt_chars,
);
if !native_tools {
system_prompt.push_str(&build_tool_instructions(&tools_registry, Some(&i18n_descs)));
@@ -4828,6 +5066,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect_err("provider without vision support should fail");
@@ -4878,6 +5117,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect_err("oversized payload must fail");
@@ -4922,6 +5162,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("valid multimodal payload should pass");
@@ -5052,6 +5293,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("parallel execution should complete");
@@ -5122,6 +5364,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("cron_add delivery defaults should be injected");
@@ -5184,6 +5427,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("explicit delivery mode should be preserved");
@@ -5241,6 +5485,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("loop should finish after deduplicating repeated calls");
@@ -5310,6 +5555,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("non-interactive shell should succeed for low-risk command");
@@ -5370,6 +5616,7 @@ mod tests {
&exempt,
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("loop should finish with exempt tool executing twice");
@@ -5450,6 +5697,7 @@ mod tests {
&exempt,
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("loop should complete");
@@ -5507,6 +5755,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("native fallback id flow should complete");
@@ -5588,6 +5837,7 @@ mod tests {
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("native tool-call text should be relayed through on_delta");
@@ -6383,7 +6633,7 @@ Tail"#;
assert_eq!(mem.count().await.unwrap(), 2);
let recalled = mem.recall("45", 5, None).await.unwrap();
let recalled = mem.recall("45", 5, None, None, None).await.unwrap();
assert!(recalled.iter().any(|entry| entry.content.contains("45")));
}
@@ -7573,6 +7823,7 @@ Let me check the result."#;
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("tool loop should complete");
@@ -7650,4 +7901,215 @@ Let me check the result."#;
let result = filter_by_allowed_tools(specs, Some(&allowed));
assert!(result.is_empty());
}
// ── Cost tracking tests ──
#[tokio::test]
async fn cost_tracking_records_usage_when_scoped() {
use super::{
run_tool_call_loop, ToolLoopCostTrackingContext, TOOL_LOOP_COST_TRACKING_CONTEXT,
};
use crate::config::schema::ModelPricing;
use crate::cost::CostTracker;
use crate::observability::noop::NoopObserver;
use std::collections::HashMap;
let provider = ScriptedProvider {
responses: Arc::new(Mutex::new(VecDeque::from([ChatResponse {
text: Some("done".to_string()),
tool_calls: Vec::new(),
usage: Some(crate::providers::traits::TokenUsage {
input_tokens: Some(1_000),
output_tokens: Some(200),
cached_input_tokens: None,
}),
reasoning_content: None,
}]))),
capabilities: ProviderCapabilities::default(),
};
let observer = NoopObserver;
let workspace = tempfile::TempDir::new().unwrap();
let mut cost_config = crate::config::CostConfig {
enabled: true,
..crate::config::CostConfig::default()
};
cost_config.prices = HashMap::from([(
"mock-model".to_string(),
ModelPricing {
input: 3.0,
output: 15.0,
},
)]);
let tracker = Arc::new(CostTracker::new(cost_config.clone(), workspace.path()).unwrap());
let ctx = ToolLoopCostTrackingContext::new(
Arc::clone(&tracker),
Arc::new(cost_config.prices.clone()),
);
let mut history = vec![ChatMessage::system("test"), ChatMessage::user("hello")];
let result = TOOL_LOOP_COST_TRACKING_CONTEXT
.scope(
Some(ctx),
run_tool_call_loop(
&provider,
&mut history,
&[],
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"test",
None,
&crate::config::MultimodalConfig::default(),
2,
None,
None,
None,
&[],
&[],
None,
None,
&crate::config::PacingConfig::default(),
),
)
.await
.expect("tool loop should succeed");
assert_eq!(result, "done");
let summary = tracker.get_summary().unwrap();
assert_eq!(summary.request_count, 1);
assert_eq!(summary.total_tokens, 1_200);
assert!(summary.session_cost_usd > 0.0);
}
#[tokio::test]
async fn cost_tracking_enforces_budget() {
use super::{
run_tool_call_loop, ToolLoopCostTrackingContext, TOOL_LOOP_COST_TRACKING_CONTEXT,
};
use crate::config::schema::ModelPricing;
use crate::cost::CostTracker;
use crate::observability::noop::NoopObserver;
use std::collections::HashMap;
let provider = ScriptedProvider::from_text_responses(vec!["should not reach this"]);
let observer = NoopObserver;
let workspace = tempfile::TempDir::new().unwrap();
let cost_config = crate::config::CostConfig {
enabled: true,
daily_limit_usd: 0.001, // very low limit
..crate::config::CostConfig::default()
};
let tracker = Arc::new(CostTracker::new(cost_config.clone(), workspace.path()).unwrap());
// Record a usage that already exceeds the limit
tracker
.record_usage(crate::cost::types::TokenUsage::new(
"mock-model",
100_000,
50_000,
1.0,
1.0,
))
.unwrap();
let ctx = ToolLoopCostTrackingContext::new(
Arc::clone(&tracker),
Arc::new(HashMap::from([(
"mock-model".to_string(),
ModelPricing {
input: 1.0,
output: 1.0,
},
)])),
);
let mut history = vec![ChatMessage::system("test"), ChatMessage::user("hello")];
let err = TOOL_LOOP_COST_TRACKING_CONTEXT
.scope(
Some(ctx),
run_tool_call_loop(
&provider,
&mut history,
&[],
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"test",
None,
&crate::config::MultimodalConfig::default(),
2,
None,
None,
None,
&[],
&[],
None,
None,
&crate::config::PacingConfig::default(),
),
)
.await
.expect_err("should fail with budget exceeded");
assert!(
err.to_string().contains("Budget exceeded"),
"error should mention budget: {err}"
);
}
#[tokio::test]
async fn cost_tracking_is_noop_without_scope() {
use super::run_tool_call_loop;
use crate::observability::noop::NoopObserver;
// No TOOL_LOOP_COST_TRACKING_CONTEXT scoped — should run fine
let provider = ScriptedProvider {
responses: Arc::new(Mutex::new(VecDeque::from([ChatResponse {
text: Some("ok".to_string()),
tool_calls: Vec::new(),
usage: Some(crate::providers::traits::TokenUsage {
input_tokens: Some(500),
output_tokens: Some(100),
cached_input_tokens: None,
}),
reasoning_content: None,
}]))),
capabilities: ProviderCapabilities::default(),
};
let observer = NoopObserver;
let mut history = vec![ChatMessage::system("test"), ChatMessage::user("hello")];
let result = run_tool_call_loop(
&provider,
&mut history,
&[],
&observer,
"mock-provider",
"mock-model",
0.0,
true,
None,
"test",
None,
&crate::config::MultimodalConfig::default(),
2,
None,
None,
None,
&[],
&[],
None,
None,
&crate::config::PacingConfig::default(),
)
.await
.expect("should succeed without cost scope");
assert_eq!(result, "ok");
}
}
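The cost-tracking tests above configure `ModelPricing { input: 3.0, output: 15.0 }` for `mock-model` and assert a positive `session_cost_usd` for a 1,000-input / 200-output response. Assuming those prices are USD per million tokens (a common convention, not confirmed by this diff), the arithmetic the tracker would perform can be sketched as:

```rust
// Hypothetical cost formula. USD-per-million-token pricing is an
// assumption; the actual CostTracker implementation is not shown here.
fn estimate_cost_usd(
    input_tokens: u64,
    output_tokens: u64,
    price_in_per_m: f64,
    price_out_per_m: f64,
) -> f64 {
    (input_tokens as f64 / 1_000_000.0) * price_in_per_m
        + (output_tokens as f64 / 1_000_000.0) * price_out_per_m
}

fn main() {
    // 1_000 input and 200 output tokens at $3 / $15 per million,
    // matching the values used in cost_tracking_records_usage_when_scoped.
    let cost = estimate_cost_usd(1_000, 200, 3.0, 15.0);
    assert!((cost - 0.006).abs() < 1e-9);
    assert!(cost > 0.0);
    println!("{cost}");
}
```

Under this assumption the scoped test's usage would cost $0.006, comfortably under any realistic `daily_limit_usd`, which is why the budget test has to pre-record 150,000 tokens at $1/$1 to trip the 0.001 limit.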
+7 -1
@@ -43,7 +43,9 @@ impl MemoryLoader for DefaultMemoryLoader {
user_message: &str,
session_id: Option<&str>,
) -> anyhow::Result<String> {
let entries = memory.recall(user_message, self.limit, session_id).await?;
let entries = memory
.recall(user_message, self.limit, session_id, None, None)
.await?;
if entries.is_empty() {
return Ok(String::new());
}
@@ -102,6 +104,8 @@ mod tests {
_query: &str,
limit: usize,
_session_id: Option<&str>,
_since: Option<&str>,
_until: Option<&str>,
) -> anyhow::Result<Vec<MemoryEntry>> {
if limit == 0 {
return Ok(vec![]);
@@ -163,6 +167,8 @@ mod tests {
_query: &str,
_limit: usize,
_session_id: Option<&str>,
_since: Option<&str>,
_until: Option<&str>,
) -> anyhow::Result<Vec<MemoryEntry>> {
Ok(self.entries.as_ref().clone())
}
+108 -8
@@ -1,6 +1,7 @@
use crate::config::IdentityConfig;
use crate::i18n::ToolDescriptions;
use crate::identity;
use crate::security::AutonomyLevel;
use crate::skills::Skill;
use crate::tools::Tool;
use anyhow::Result;
@@ -26,6 +27,10 @@ pub struct PromptContext<'a> {
/// (allowed commands, forbidden paths, autonomy level) so it can plan
/// tool calls without trial-and-error. See issue #2404.
pub security_summary: Option<String>,
/// Autonomy level from config. Controls whether the safety section
/// includes "ask before acting" instructions. Full autonomy omits them
/// so the model executes tools directly without simulating approval.
pub autonomy_level: AutonomyLevel,
}
pub trait PromptSection: Send + Sync {
@@ -177,14 +182,39 @@ impl PromptSection for SafetySection {
}
fn build(&self, ctx: &PromptContext<'_>) -> Result<String> {
let mut out = String::from(
"## Safety\n\n\
- Do not exfiltrate private data.\n\
- Do not run destructive commands without asking.\n\
- Do not bypass oversight or approval mechanisms.\n\
- Prefer `trash` over `rm`.\n\
- When in doubt, ask before acting externally.",
);
let mut out = String::from("## Safety\n\n- Do not exfiltrate private data.\n");
// Omit "ask before acting" instructions when autonomy is Full —
// mirrors build_system_prompt_with_mode_and_autonomy. See #3952.
if ctx.autonomy_level != AutonomyLevel::Full {
out.push_str(
"- Do not run destructive commands without asking.\n\
- Do not bypass oversight or approval mechanisms.\n",
);
}
out.push_str("- Prefer `trash` over `rm`.\n");
out.push_str(match ctx.autonomy_level {
AutonomyLevel::Full => {
"- Respect the runtime autonomy policy: if a tool or action is allowed, \
execute it directly instead of asking the user for extra approval.\n\
- If a tool or action is blocked by policy or unavailable, explain that \
concrete restriction instead of simulating an approval dialog."
}
AutonomyLevel::ReadOnly => {
"- This runtime is read-only for side effects unless a tool explicitly \
reports otherwise.\n\
- If a requested action is blocked by policy, explain the restriction \
directly instead of simulating an approval dialog."
}
AutonomyLevel::Supervised => {
"- When in doubt, ask before acting externally.\n\
- Respect the runtime autonomy policy: ask for approval only when the \
current runtime policy actually requires it.\n\
- If a tool or action is blocked by policy or unavailable, explain that \
concrete restriction instead of simulating an approval dialog."
}
});
// Append concrete security policy constraints when available (#2404).
// This tells the LLM exactly what commands are allowed, which paths
@@ -367,6 +397,7 @@ mod tests {
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Supervised,
};
let section = IdentitySection;
@@ -397,6 +428,7 @@ mod tests {
dispatcher_instructions: "instr",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Supervised,
};
let prompt = SystemPromptBuilder::with_defaults().build(&ctx).unwrap();
assert!(prompt.contains("## Tools"));
@@ -434,6 +466,7 @@ mod tests {
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Supervised,
};
let output = SkillsSection.build(&ctx).unwrap();
@@ -474,6 +507,7 @@ mod tests {
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Supervised,
};
let output = SkillsSection.build(&ctx).unwrap();
@@ -501,6 +535,7 @@ mod tests {
dispatcher_instructions: "instr",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Supervised,
};
let rendered = DateTimeSection.build(&ctx).unwrap();
@@ -541,6 +576,7 @@ mod tests {
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Supervised,
};
let prompt = SystemPromptBuilder::with_defaults().build(&ctx).unwrap();
@@ -574,6 +610,7 @@ mod tests {
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: Some(summary.clone()),
autonomy_level: AutonomyLevel::Supervised,
};
let output = SafetySection.build(&ctx).unwrap();
@@ -608,6 +645,7 @@ mod tests {
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Supervised,
};
let output = SafetySection.build(&ctx).unwrap();
@@ -620,4 +658,66 @@ mod tests {
"should NOT contain security policy header when None"
);
}
#[test]
fn safety_section_full_autonomy_omits_approval_instructions() {
let tools: Vec<Box<dyn Tool>> = vec![];
let ctx = PromptContext {
workspace_dir: Path::new("/tmp"),
model_name: "test-model",
tools: &tools,
skills: &[],
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Full,
};
let output = SafetySection.build(&ctx).unwrap();
assert!(
!output.contains("without asking"),
"full autonomy should NOT include 'ask before acting' instructions"
);
assert!(
!output.contains("bypass oversight"),
"full autonomy should NOT include 'bypass oversight' instructions"
);
assert!(
output.contains("execute it directly"),
"full autonomy should instruct to execute directly"
);
assert!(
output.contains("Do not exfiltrate"),
"full autonomy should still include data exfiltration guard"
);
}
#[test]
fn safety_section_supervised_includes_approval_instructions() {
let tools: Vec<Box<dyn Tool>> = vec![];
let ctx = PromptContext {
workspace_dir: Path::new("/tmp"),
model_name: "test-model",
tools: &tools,
skills: &[],
skills_prompt_mode: crate::config::SkillsPromptInjectionMode::Full,
identity_config: None,
dispatcher_instructions: "",
tool_descriptions: None,
security_summary: None,
autonomy_level: AutonomyLevel::Supervised,
};
let output = SafetySection.build(&ctx).unwrap();
assert!(
output.contains("without asking"),
"supervised should include 'ask before acting' instructions"
);
assert!(
output.contains("bypass oversight"),
"supervised should include 'bypass oversight' instructions"
);
}
}
+2 -2
@@ -122,7 +122,7 @@ impl ApprovalManager {
}
// always_ask overrides everything.
if self.always_ask.contains(tool_name) {
if self.always_ask.contains("*") || self.always_ask.contains(tool_name) {
return true;
}
@@ -136,7 +136,7 @@ impl ApprovalManager {
}
// auto_approve skips the prompt.
if self.auto_approve.contains(tool_name) {
if self.auto_approve.contains("*") || self.auto_approve.contains(tool_name) {
return false;
}
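The two-line change above adds `"*"` wildcard support to both lists while keeping the existing precedence: `always_ask` is checked first, so it overrides `auto_approve` even when the latter contains the wildcard. A minimal sketch of that precedence (the real `ApprovalManager` has additional branches between these checks that the hunks elide):

```rust
use std::collections::HashSet;

// Sketch of approval precedence with "*" wildcard support.
// always_ask is consulted first, so it overrides auto_approve.
fn requires_approval(
    always_ask: &HashSet<String>,
    auto_approve: &HashSet<String>,
    tool_name: &str,
) -> bool {
    // always_ask overrides everything, including "*" in auto_approve.
    if always_ask.contains("*") || always_ask.contains(tool_name) {
        return true;
    }
    // auto_approve skips the prompt.
    if auto_approve.contains("*") || auto_approve.contains(tool_name) {
        return false;
    }
    // Default in this sketch: ask.
    true
}

fn main() {
    let ask: HashSet<String> = ["shell".to_string()].into_iter().collect();
    let auto: HashSet<String> = ["*".to_string()].into_iter().collect();
    // "*" in auto_approve covers tools not listed anywhere...
    assert!(!requires_approval(&ask, &auto, "web_fetch"));
    // ...but an explicit always_ask entry still wins over the wildcard.
    assert!(requires_approval(&ask, &auto, "shell"));
    println!("ok");
}
```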
+10
@@ -338,6 +338,16 @@ pub fn extract_account_id_from_jwt(token: &str) -> Option<String> {
None
}
pub fn extract_expiry_from_jwt(token: &str) -> Option<chrono::DateTime<Utc>> {
let payload = token.split('.').nth(1)?;
let decoded = base64::engine::general_purpose::URL_SAFE_NO_PAD
.decode(payload)
.ok()?;
let claims: serde_json::Value = serde_json::from_slice(&decoded).ok()?;
let exp = claims.get("exp").and_then(|v| v.as_i64())?;
chrono::DateTime::<Utc>::from_timestamp(exp, 0)
}
async fn parse_token_response(response: reqwest::Response) -> Result<TokenSet> {
if !response.status().is_success() {
let status = response.status();
+10 -1
@@ -18,6 +18,8 @@ pub struct DingTalkChannel {
/// Per-chat session webhooks for sending replies (chatID -> webhook URL).
/// DingTalk provides a unique webhook URL with each incoming message.
session_webhooks: Arc<RwLock<HashMap<String, String>>>,
/// Per-channel proxy URL override.
proxy_url: Option<String>,
}
/// Response from DingTalk gateway connection registration.
@@ -34,11 +36,18 @@ impl DingTalkChannel {
client_secret,
allowed_users,
session_webhooks: Arc::new(RwLock::new(HashMap::new())),
proxy_url: None,
}
}
/// Set a per-channel proxy URL that overrides the global proxy config.
pub fn with_proxy_url(mut self, proxy_url: Option<String>) -> Self {
self.proxy_url = proxy_url;
self
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.dingtalk")
crate::config::build_channel_proxy_client("channel.dingtalk", self.proxy_url.as_deref())
}
fn is_user_allowed(&self, user_id: &str) -> bool {
+10 -1
@@ -18,6 +18,8 @@ pub struct DiscordChannel {
listen_to_bots: bool,
mention_only: bool,
typing_handles: Mutex<HashMap<String, tokio::task::JoinHandle<()>>>,
/// Per-channel proxy URL override.
proxy_url: Option<String>,
}
impl DiscordChannel {
@@ -35,11 +37,18 @@ impl DiscordChannel {
listen_to_bots,
mention_only,
typing_handles: Mutex::new(HashMap::new()),
proxy_url: None,
}
}
/// Set a per-channel proxy URL that overrides the global proxy config.
pub fn with_proxy_url(mut self, proxy_url: Option<String>) -> Self {
self.proxy_url = proxy_url;
self
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_runtime_proxy_client("channel.discord")
crate::config::build_channel_proxy_client("channel.discord", self.proxy_url.as_deref())
}
/// Check if a Discord user ID is in the allowlist.
+549
@@ -0,0 +1,549 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use async_trait::async_trait;
use futures_util::{SinkExt, StreamExt};
use parking_lot::Mutex;
use serde_json::json;
use std::collections::HashMap;
use std::sync::Arc;
use tokio_tungstenite::tungstenite::Message;
use uuid::Uuid;
use crate::memory::{Memory, MemoryCategory};
/// Discord History channel — connects via Gateway WebSocket, stores ALL non-bot messages
/// to a dedicated discord.db, and forwards @mention messages to the agent.
pub struct DiscordHistoryChannel {
bot_token: String,
guild_id: Option<String>,
allowed_users: Vec<String>,
/// Channel IDs to watch. Empty = watch all channels.
channel_ids: Vec<String>,
/// Dedicated discord.db memory backend.
discord_memory: Arc<dyn Memory>,
typing_handles: Mutex<HashMap<String, tokio::task::JoinHandle<()>>>,
proxy_url: Option<String>,
/// When false, DM messages are not stored in discord.db.
store_dms: bool,
/// When false, @mentions in DMs are not forwarded to the agent.
respond_to_dms: bool,
}
impl DiscordHistoryChannel {
pub fn new(
bot_token: String,
guild_id: Option<String>,
allowed_users: Vec<String>,
channel_ids: Vec<String>,
discord_memory: Arc<dyn Memory>,
store_dms: bool,
respond_to_dms: bool,
) -> Self {
Self {
bot_token,
guild_id,
allowed_users,
channel_ids,
discord_memory,
typing_handles: Mutex::new(HashMap::new()),
proxy_url: None,
store_dms,
respond_to_dms,
}
}
pub fn with_proxy_url(mut self, proxy_url: Option<String>) -> Self {
self.proxy_url = proxy_url;
self
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_channel_proxy_client(
"channel.discord_history",
self.proxy_url.as_deref(),
)
}
fn is_user_allowed(&self, user_id: &str) -> bool {
if self.allowed_users.is_empty() {
return true; // default open for logging channel
}
self.allowed_users.iter().any(|u| u == "*" || u == user_id)
}
fn is_channel_watched(&self, channel_id: &str) -> bool {
self.channel_ids.is_empty() || self.channel_ids.iter().any(|c| c == channel_id)
}
fn bot_user_id_from_token(token: &str) -> Option<String> {
let part = token.split('.').next()?;
base64_decode(part)
}
async fn resolve_channel_name(&self, channel_id: &str) -> String {
// 1. Check persistent database (via discord_memory)
let cache_key = format!("cache:channel_name:{}", channel_id);
if let Ok(Some(cached_mem)) = self.discord_memory.get(&cache_key).await {
// Check if it's still fresh (e.g., less than 24 hours old)
// Note: cached_mem.timestamp is an RFC3339 string
let is_fresh =
if let Ok(ts) = chrono::DateTime::parse_from_rfc3339(&cached_mem.timestamp) {
chrono::Utc::now().signed_duration_since(ts.with_timezone(&chrono::Utc))
< chrono::Duration::hours(24)
} else {
false
};
if is_fresh {
return cached_mem.content.clone();
}
}
// 2. Fetch from API (either not in DB or stale)
let url = format!("https://discord.com/api/v10/channels/{channel_id}");
let resp = self
.http_client()
.get(&url)
.header("Authorization", format!("Bot {}", self.bot_token))
.send()
.await;
let name = if let Ok(r) = resp {
if let Ok(json) = r.json::<serde_json::Value>().await {
json.get("name")
.and_then(|n| n.as_str())
.map(|s| s.to_string())
.or_else(|| {
// For DMs, there might not be a 'name', use the recipient's username if available
json.get("recipients")
.and_then(|r| r.as_array())
.and_then(|a| a.first())
.and_then(|u| u.get("username"))
.and_then(|un| un.as_str())
.map(|s| format!("dm-{}", s))
})
} else {
None
}
} else {
None
};
let resolved = name.unwrap_or_else(|| channel_id.to_string());
// 3. Store in persistent database
let _ = self
.discord_memory
.store(
&cache_key,
&resolved,
crate::memory::MemoryCategory::Custom("channel_cache".to_string()),
Some(channel_id),
)
.await;
resolved
}
}
const BASE64_ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
#[allow(clippy::cast_possible_truncation)]
fn base64_decode(input: &str) -> Option<String> {
let padded = match input.len() % 4 {
2 => format!("{input}=="),
3 => format!("{input}="),
_ => input.to_string(),
};
let mut bytes = Vec::new();
let chars: Vec<u8> = padded.bytes().collect();
for chunk in chars.chunks(4) {
if chunk.len() < 4 {
break;
}
let mut v = [0usize; 4];
for (i, &b) in chunk.iter().enumerate() {
if b == b'=' {
v[i] = 0;
} else {
v[i] = BASE64_ALPHABET.iter().position(|&a| a == b)?;
}
}
bytes.push(((v[0] << 2) | (v[1] >> 4)) as u8);
if chunk[2] != b'=' {
bytes.push((((v[1] & 0xF) << 4) | (v[2] >> 2)) as u8);
}
if chunk[3] != b'=' {
bytes.push((((v[2] & 0x3) << 6) | v[3]) as u8);
}
}
String::from_utf8(bytes).ok()
}
fn contains_bot_mention(content: &str, bot_user_id: &str) -> bool {
if bot_user_id.is_empty() {
return false;
}
content.contains(&format!("<@{bot_user_id}>"))
|| content.contains(&format!("<@!{bot_user_id}>"))
}
fn strip_bot_mention(content: &str, bot_user_id: &str) -> String {
let mut result = content.to_string();
for tag in [format!("<@{bot_user_id}>"), format!("<@!{bot_user_id}>")] {
result = result.replace(&tag, " ");
}
result.trim().to_string()
}
#[async_trait]
impl Channel for DiscordHistoryChannel {
fn name(&self) -> &str {
"discord_history"
}
/// Send a reply back to Discord (used when agent responds to @mention).
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
let content = super::strip_tool_call_tags(&message.content);
let url = format!(
"https://discord.com/api/v10/channels/{}/messages",
message.recipient
);
self.http_client()
.post(&url)
.header("Authorization", format!("Bot {}", self.bot_token))
.json(&json!({"content": content}))
.send()
.await?;
Ok(())
}
#[allow(clippy::too_many_lines)]
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> anyhow::Result<()> {
let bot_user_id = Self::bot_user_id_from_token(&self.bot_token).unwrap_or_default();
// Get Gateway URL
let gw_resp: serde_json::Value = self
.http_client()
.get("https://discord.com/api/v10/gateway/bot")
.header("Authorization", format!("Bot {}", self.bot_token))
.send()
.await?
.json()
.await?;
let gw_url = gw_resp
.get("url")
.and_then(|u| u.as_str())
.unwrap_or("wss://gateway.discord.gg");
let ws_url = format!("{gw_url}/?v=10&encoding=json");
tracing::info!("DiscordHistory: connecting to gateway...");
let (ws_stream, _) = tokio_tungstenite::connect_async(&ws_url).await?;
let (mut write, mut read) = ws_stream.split();
// Read Hello (opcode 10)
let hello = read.next().await.ok_or(anyhow::anyhow!("No hello"))??;
let hello_data: serde_json::Value = serde_json::from_str(&hello.to_string())?;
let heartbeat_interval = hello_data
.get("d")
.and_then(|d| d.get("heartbeat_interval"))
.and_then(serde_json::Value::as_u64)
.unwrap_or(41250);
// Identify with intents for guild + DM messages + message content
let identify = json!({
"op": 2,
"d": {
"token": self.bot_token,
"intents": 37377,
"properties": {
"os": "linux",
"browser": "zeroclaw",
"device": "zeroclaw"
}
}
});
write
.send(Message::Text(identify.to_string().into()))
.await?;
tracing::info!("DiscordHistory: connected and identified");
let mut sequence: i64 = -1;
let (hb_tx, mut hb_rx) = tokio::sync::mpsc::channel::<()>(1);
tokio::spawn(async move {
let mut interval =
tokio::time::interval(std::time::Duration::from_millis(heartbeat_interval));
loop {
interval.tick().await;
if hb_tx.send(()).await.is_err() {
break;
}
}
});
let guild_filter = self.guild_id.clone();
let discord_memory = Arc::clone(&self.discord_memory);
let store_dms = self.store_dms;
let respond_to_dms = self.respond_to_dms;
loop {
tokio::select! {
_ = hb_rx.recv() => {
let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
let hb = json!({"op": 1, "d": d});
if write.send(Message::Text(hb.to_string().into())).await.is_err() {
break;
}
}
msg = read.next() => {
let msg = match msg {
Some(Ok(Message::Text(t))) => t,
Some(Ok(Message::Ping(payload))) => {
if write.send(Message::Pong(payload)).await.is_err() {
break;
}
continue;
}
Some(Ok(Message::Close(_))) | None => break,
Some(Err(e)) => {
tracing::warn!("DiscordHistory: websocket error: {e}");
break;
}
_ => continue,
};
let event: serde_json::Value = match serde_json::from_str(msg.as_ref()) {
Ok(e) => e,
Err(_) => continue,
};
if let Some(s) = event.get("s").and_then(serde_json::Value::as_i64) {
sequence = s;
}
let op = event.get("op").and_then(serde_json::Value::as_u64).unwrap_or(0);
match op {
1 => {
let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
let hb = json!({"op": 1, "d": d});
if write.send(Message::Text(hb.to_string().into())).await.is_err() {
break;
}
continue;
}
7 => { tracing::warn!("DiscordHistory: Reconnect (op 7)"); break; }
9 => { tracing::warn!("DiscordHistory: Invalid Session (op 9)"); break; }
_ => {}
}
let event_type = event.get("t").and_then(|t| t.as_str()).unwrap_or("");
if event_type != "MESSAGE_CREATE" {
continue;
}
let Some(d) = event.get("d") else { continue };
// Skip messages from the bot itself
let author_id = d
.get("author")
.and_then(|a| a.get("id"))
.and_then(|i| i.as_str())
.unwrap_or("");
let username = d
.get("author")
.and_then(|a| a.get("username"))
.and_then(|i| i.as_str())
.unwrap_or(author_id);
if author_id == bot_user_id {
continue;
}
// Skip other bots
if d.get("author")
.and_then(|a| a.get("bot"))
.and_then(serde_json::Value::as_bool)
.unwrap_or(false)
{
continue;
}
let channel_id = d
.get("channel_id")
.and_then(|c| c.as_str())
.unwrap_or("")
.to_string();
// DM detection: DMs have no guild_id
let is_dm_event = d.get("guild_id").and_then(serde_json::Value::as_str).is_none();
// Resolve channel name (with cache)
let channel_display = if is_dm_event {
"dm".to_string()
} else {
self.resolve_channel_name(&channel_id).await
};
if is_dm_event && !store_dms && !respond_to_dms {
continue;
}
// Guild filter
if let Some(ref gid) = guild_filter {
let msg_guild = d.get("guild_id").and_then(serde_json::Value::as_str);
if let Some(g) = msg_guild {
if g != gid {
continue;
}
}
}
// Channel filter
if !self.is_channel_watched(&channel_id) {
continue;
}
if !self.is_user_allowed(author_id) {
continue;
}
let content = d.get("content").and_then(|c| c.as_str()).unwrap_or("");
let message_id = d.get("id").and_then(|i| i.as_str()).unwrap_or("");
let is_mention = contains_bot_mention(content, &bot_user_id);
// Collect attachment URLs
let attachments: Vec<String> = d
.get("attachments")
.and_then(|a| a.as_array())
.map(|arr| {
arr.iter()
.filter_map(|a| a.get("url").and_then(|u| u.as_str()))
.map(|u| u.to_string())
.collect()
})
.unwrap_or_default();
// Store messages to discord.db (skip DMs if store_dms=false)
if (!is_dm_event || store_dms) && (!content.is_empty() || !attachments.is_empty()) {
let ts = chrono::Utc::now().to_rfc3339();
let mut mem_content = format!(
"@{username} in #{channel_display} at {ts}: {content}"
);
if !attachments.is_empty() {
mem_content.push_str(" [attachments: ");
mem_content.push_str(&attachments.join(", "));
mem_content.push(']');
}
let mem_key = format!(
"discord_{}",
if message_id.is_empty() {
Uuid::new_v4().to_string()
} else {
message_id.to_string()
}
);
let channel_id_for_session = if channel_id.is_empty() {
None
} else {
Some(channel_id.as_str())
};
if let Err(err) = discord_memory
.store(
&mem_key,
&mem_content,
MemoryCategory::Custom("discord".to_string()),
channel_id_for_session,
)
.await
{
tracing::warn!("discord_history: failed to store message: {err}");
} else {
tracing::debug!(
"discord_history: stored message from @{username} in #{channel_display}"
);
}
}
// Forward @mention to agent (skip DMs if respond_to_dms=false)
if is_mention && (!is_dm_event || respond_to_dms) {
let clean_content = strip_bot_mention(content, &bot_user_id);
if clean_content.is_empty() {
continue;
}
let channel_msg = ChannelMessage {
id: if message_id.is_empty() {
Uuid::new_v4().to_string()
} else {
format!("discord_{message_id}")
},
sender: author_id.to_string(),
reply_target: if channel_id.is_empty() {
author_id.to_string()
} else {
channel_id.clone()
},
content: clean_content,
channel: "discord_history".to_string(),
timestamp: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
interruption_scope_id: None,
};
if tx.send(channel_msg).await.is_err() {
break;
}
}
}
}
}
Ok(())
}
async fn health_check(&self) -> bool {
self.http_client()
.get("https://discord.com/api/v10/users/@me")
.header("Authorization", format!("Bot {}", self.bot_token))
.send()
.await
.map(|r| r.status().is_success())
.unwrap_or(false)
}
async fn start_typing(&self, recipient: &str) -> anyhow::Result<()> {
let mut guard = self.typing_handles.lock();
if let Some(h) = guard.remove(recipient) {
h.abort();
}
let client = self.http_client();
let token = self.bot_token.clone();
let channel_id = recipient.to_string();
let handle = tokio::spawn(async move {
let url = format!("https://discord.com/api/v10/channels/{channel_id}/typing");
loop {
let _ = client
.post(&url)
.header("Authorization", format!("Bot {token}"))
.send()
.await;
tokio::time::sleep(std::time::Duration::from_secs(8)).await;
}
});
guard.insert(recipient.to_string(), handle);
Ok(())
}
async fn stop_typing(&self, recipient: &str) -> anyhow::Result<()> {
let mut guard = self.typing_handles.lock();
if let Some(handle) = guard.remove(recipient) {
handle.abort();
}
Ok(())
}
}
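`resolve_channel_name` above treats a cached channel name as stale once its stored timestamp is 24 hours old, falling back to the Discord API and re-storing the result. The freshness rule can be sketched with std-only types (the real code parses RFC3339 strings with chrono, and a failed parse counts as stale):

```rust
use std::time::{Duration, SystemTime};

// Sketch of the 24-hour TTL check for cached channel names.
// An entry is fresh if its timestamp is less than 24h in the past;
// a missing/unparsable timestamp is treated as stale, forcing a refetch.
fn is_fresh(cached_at: Option<SystemTime>, now: SystemTime) -> bool {
    match cached_at {
        Some(ts) => now
            .duration_since(ts)
            // A timestamp in the future yields Err; like a negative
            // signed duration in the real code, that still counts as fresh.
            .map(|age| age < Duration::from_secs(24 * 60 * 60))
            .unwrap_or(true),
        None => false,
    }
}

fn main() {
    let now = SystemTime::now();
    assert!(is_fresh(Some(now - Duration::from_secs(3_600)), now)); // 1h old
    assert!(!is_fresh(Some(now - Duration::from_secs(25 * 3_600)), now)); // 25h old
    assert!(!is_fresh(None, now)); // unparsable timestamp
    println!("ok");
}
```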

Some files were not shown because too many files have changed in this diff.