Compare commits

...

39 Commits

Author SHA1 Message Date
argenis de la rosa 2479b17523 fix: serialize env-dependent codex tests to prevent race (#4210)
Add a process-scoped Mutex that all env-var-mutating tests in
openai_codex::tests must hold.  This prevents std::env::set_var /
remove_var calls from racing when Rust's test harness runs them on
parallel threads.

Affected tests:
- resolve_responses_url_prefers_explicit_endpoint_env
- resolve_responses_url_uses_provider_api_url_override
- resolve_reasoning_effort_prefers_configured_override
- resolve_reasoning_effort_uses_legacy_env_when_unconfigured
2026-03-21 23:57:07 -04:00
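The serialization pattern this commit describes can be sketched as follows (a minimal illustration, not the actual `openai_codex` test code; the lock and variable names are hypothetical):

```rust
use std::sync::{Mutex, MutexGuard};

// Process-scoped lock; every test that mutates process env vars holds it.
static ENV_LOCK: Mutex<()> = Mutex::new(());

// Acquire the lock, recovering from poisoning so one panicked test
// does not cascade failures into the rest of the suite.
fn lock_env() -> MutexGuard<'static, ()> {
    ENV_LOCK
        .lock()
        .unwrap_or_else(|poisoned| poisoned.into_inner())
}

fn main() {
    {
        let _guard = lock_env();
        // While the guard is held, set_var/remove_var cannot race
        // against another env-mutating test on a parallel thread.
        std::env::set_var("CODEX_TEST_ENDPOINT", "https://example.invalid/v1");
        assert_eq!(
            std::env::var("CODEX_TEST_ENDPOINT").as_deref(),
            Ok("https://example.invalid/v1")
        );
        std::env::remove_var("CODEX_TEST_ENDPOINT");
        assert!(std::env::var("CODEX_TEST_ENDPOINT").is_err());
    } // guard released here
}
```

Rust runs tests on parallel threads by default, but `std::env` state is per-process, which is why a shared lock (rather than per-test setup) is the usual fix.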
Chris Hengge daca2d9354 fix(web/tools): make section headings collapsible (#4180)
Agent Tools and CLI Tools section headings were static divs with no
way to collapse sections the user is not interested in, making the
page unwieldy with a large tool set.

- Convert both section heading divs to button elements toggling
  agentSectionOpen / cliSectionOpen state (both default open)
- Section content renders conditionally on those booleans
- ChevronsUpDown icon added (already in lucide-react bundle) that
  fades in on hover and indicates collapsed/expanded state
- No change to individual tool card parameter schema expand/collapse

Risk: Low — UI state only, no API or logic change.
Does not change: search/filter behaviour, tool card expand/collapse,
CLI tools table structure.
2026-03-21 22:25:18 -04:00
Chris Hengge 3c1e710c38 fix(web/logs): layout, footer status indicator, and empty-state note (#4203)
Three issues addressed:

Empty page: the logs page shows nothing at idle because the SSE stream
only carries ObserverEvent variants (llm_request, tool_call, error,
agent_start, agent_end). Daemon stdout and RUST_LOG tracing output go
to the terminal/log file and are never forwarded to the broadcast
channel — this is correct behaviour, not a misconfiguration. Added a
dismissible informational banner explaining what appears on the stream
and how to access tracing output (RUST_LOG + terminal).

Layout: flex-1 log entries div was missing min-h-0, which can cause
flex children to refuse to shrink below content size in some browsers.

Connection indicator: moved from the toolbar (where it cluttered the
title and controls) to a compact footer strip below the scroll area,
matching the /agent page pattern exactly.

Also added colour rules for llm_request, agent_start, agent_end,
tool_call_start event types which previously fell through to default
grey.

Risk: Low — UI layout and informational copy only, no backend change.
Does not change: SSE connection logic, event parsing, pause/resume,
type filters, or the underlying broadcast observer.
2026-03-21 22:11:20 -04:00
Chris Hengge 0aefde95f2 fix(web/config): fill viewport and add TOML syntax highlighting (#4201)
Two issues with the config editor:

Layout: the page root had no height constraint and the textarea used
min-h-[500px] resize-y, causing independent scrollbars on both the
page and the editor. Fixed by adopting the Memory/Cron flex column
pattern so the editor fills the remaining viewport height with a single
scroll surface.

Highlighting: plain textarea with no visual structure for TOML.
Added a zero-dependency layered pre-overlay technique — no new npm
packages (per CLAUDE.md anti-pattern rules). A pre element sits
absolute behind a transparent textarea; highlightToml() produces HTML
colour-coding sections, keys, strings, booleans, numbers, datetimes,
and comments via per-line regex. onScroll syncs the overlay. Tab key
inserts two spaces instead of leaving focus.

dangerouslySetInnerHTML used on the pre — content is the user's own
local config, not from the network, risk equivalent to any local editor.

Risk: Low-Medium — no API or backend change. New rendering logic
in editor only.
Does not change: save/load API calls, validation, sensitive field
masking behaviour.
2026-03-21 22:11:18 -04:00
Chris Hengge a84aa60554 fix(web/cron): contain table scroll within viewport (#4186)
The cron page used a block-flow root with no height constraint, causing
the jobs table to grow taller than the viewport and the page itself to
scroll. This was inconsistent with the Memory page pattern.

- Change page root to flex flex-col h-full matching Memory's layout
- Table wrapper gains flex-1 min-h-0 overflow-auto so it fills
  remaining height and scrolls both axes internally
- Table header already has position:sticky so it pins correctly
  inside the scrolling container with no CSS change needed

Risk: Low — layout only, no logic or API change.
Does not change: job CRUD, modal, catch-up toggle, run history panel.
2026-03-21 22:11:15 -04:00
Chris Hengge edd4b37325 fix(web/dashboard): rename channels card heading and add internal scroll (#4178)
The card heading used the key dashboard.active_channels ("Active Channels")
even though the card has a toggle between Active and All views, making the
static heading misleading. The channel list div had no height cap, causing
tall channel lists to stretch the card and break 3-column grid alignment.

- Change heading to t("dashboard.channels") — key already present in all
  three locales (zh/en/tr), no i18n changes needed
- Add overflow-y-auto max-h-48 pr-1 to the channel list wrapper so it
  scrolls internally instead of stretching the card
2026-03-21 22:09:00 -04:00
Argenis c5f0155061 Merge pull request #4193 from zeroclaw-labs/fix/reaction-tool
fix(tools): pass platform channel_id to reaction trait
2026-03-21 21:38:32 -04:00
argenis de la rosa 9ee06ed6fc merge: resolve conflicts with master (image_gen + sessions) 2026-03-21 21:18:46 -04:00
Argenis ac6b43e9f4 fix: remove unused channel_names field from DiscordHistoryChannel (#4199)
* feat: add discord history logging and search tool with persistent channel cache

* fix: remove unused channel_names field from DiscordHistoryChannel

The channel_names HashMap was declared and initialized but never used.
Channel name caching is handled via discord_memory.get()/store() with
the cache:channel_name: prefix. Remove the dead field.

* style: run cargo fmt on discord_history.rs

---------

Co-authored-by: ninenox <nisit15@hotmail.com>
2026-03-21 21:15:23 -04:00
Argenis 6c5573ad96 Merge pull request #4194 from zeroclaw-labs/fix/session-messaging-tools
fix(security): add enforcement and validation to session tools
2026-03-21 21:15:17 -04:00
Argenis 1d57a0d1e5 fix(web/tools): improve a11y in collapsible section headings (#4197)
* fix(web/tools): make section headings collapsible

Agent Tools and CLI Tools section headings were static divs with no
way to collapse sections the user is not interested in, making the
page unwieldy with a large tool set.

- Convert both section heading divs to button elements toggling
  agentSectionOpen / cliSectionOpen state (both default open)
- Section content renders conditionally on those booleans
- ChevronsUpDown icon added (already in lucide-react bundle) that
  fades in on hover and indicates collapsed/expanded state
- No change to individual tool card parameter schema expand/collapse

Risk: Low — UI state only, no API or logic change.
Does not change: search/filter behaviour, tool card expand/collapse,
CLI tools table structure.

* fix(web/tools): improve a11y and fix invalid HTML in collapsible sections

- Replace <h2> inside <button> with <span role="heading" aria-level={2}>
  to fix invalid HTML (heading elements not permitted in interactive content)
- Add aria-expanded attribute to section toggle buttons for screen readers
- Add aria-controls + id linking buttons to their controlled sections
- Replace ChevronsUpDown with ChevronDown icon — ChevronsUpDown is
  visually symmetric so rotating 180deg has no visible effect; ChevronDown
  rotating to -90deg gives a clear directional cue
- Remove unused ChevronsUpDown import

---------

Co-authored-by: WareWolf-MoonWall <chris.hengge@gmail.com>
2026-03-21 21:02:10 -04:00
Argenis 9780c7d797 fix: restrict free command to Linux-only in security policy (#4198)
* fix: resolve claude-code test flakiness and update security policy

* fix: restrict `free` command to Linux-only in security policy

`free` is not available on macOS or other BSDs. Move it behind
a #[cfg(target_os = "linux")] gate so it is only included in the
default allowed commands on Linux systems.

---------

Co-authored-by: ninenox <nisit15@hotmail.com>
2026-03-21 21:02:05 -04:00
Argenis 35a5451a17 fix(channels): address critical security bugs in Gmail Pub/Sub push (#4200)
* feat(channels): add Gmail Pub/Sub push notifications for real-time email

Add GmailPushChannel that replaces IMAP polling with Google's Pub/Sub
push notification system for real-time email-driven automation.

- New channel at src/channels/gmail_push.rs implementing the Channel trait
- Registers Gmail watch subscription (POST /gmail/v1/users/me/watch)
  with automatic renewal before the 7-day expiry
- Handles incoming Pub/Sub notifications at POST /webhook/gmail
- Fetches new messages via Gmail History API (startHistoryId-based)
- Dispatches email messages to the agent with full metadata
- Sends replies via Gmail messages.send API
- Config: gmail_push.enabled, topic, label_filter, oauth_token,
  allowed_senders, webhook_url
- OAuth token encrypted at rest via existing secret store
- Webhook endpoint added to gateway router
- 30+ unit tests covering notification parsing, header extraction,
  body decoding, sender allowlist, and config serialization

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(config): fix pre-existing test compilation errors in schema.rs

- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(channels): fix extract_body_text_plain test

Gmail API sends base64url without padding. The decode_body function
converted URL-safe chars back to standard base64 but did not restore
the padding, causing STANDARD decoder to fail and falling back to
snippet. Add padding restoration before decoding.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(channels): address critical security bugs in Gmail Pub/Sub push

- Add webhook authentication via shared secret (webhook_secret config
  field or GMAIL_PUSH_WEBHOOK_SECRET env var), preventing unauthorized
  message injection through the unauthenticated webhook endpoint
- Add 1MB body size limit on webhook endpoint to prevent memory exhaustion
- Fix race condition in handle_notification: hold history_id lock across
  the read-fetch-update cycle to prevent duplicate message processing
  when concurrent webhook notifications arrive
- Sanitize RFC 2822 headers (To/Subject) to prevent CRLF injection
  attacks that could add arbitrary headers to outgoing emails
- Fix extract_email_from_header panic on malformed angle brackets by
  using rfind('>') and validating bracket ordering
- Add 30s default HTTP client timeout for all Gmail API calls,
  preventing indefinite hangs
- Clone tx sender before message processing loop to avoid holding
  the mutex lock across network calls

---------

Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 20:59:56 -04:00
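The base64url padding fix mentioned in this PR can be sketched as a standalone function (an illustration of the technique, not the actual `decode_body` implementation; the function name is hypothetical):

```rust
/// Gmail's API emits body data as base64url without '=' padding.
/// Map URL-safe characters back to standard base64 and restore the
/// padding so a strict STANDARD decoder accepts the string.
fn to_padded_standard_base64(data: &str) -> String {
    let mut s: String = data
        .chars()
        .map(|c| match c {
            '-' => '+',
            '_' => '/',
            other => other,
        })
        .collect();
    // Standard base64 length must be a multiple of 4.
    match s.len() % 4 {
        2 => s.push_str("=="),
        3 => s.push('='),
        _ => {}
    }
    s
}

fn main() {
    // "ab" encodes to unpadded "YWI"; padding restores the trailing '='.
    assert_eq!(to_padded_standard_base64("YWI"), "YWI=");
    assert_eq!(to_padded_standard_base64("PGhpPg"), "PGhpPg==");
    // URL-safe alphabet is mapped back to '+' and '/'.
    assert_eq!(to_padded_standard_base64("a-b_cw"), "a+b/cw==");
}
```

Without the padding step, converting only the alphabet leaves strings whose length is not a multiple of 4, which is exactly the silent-fallback-to-snippet failure the commit describes.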
Argenis 8e81d44d54 fix(gateway): address critical security and reliability bugs in Live Canvas (#4196)
* feat(gateway): add Live Canvas (A2UI) tool and real-time web viewer

Add a Live Canvas system that enables the agent to push rendered content
(HTML, SVG, Markdown, text) to a web-visible canvas in real time.

Backend:
- src/tools/canvas.rs: CanvasTool with render/snapshot/clear/eval actions,
  backed by a shared CanvasStore (Arc<RwLock<HashMap>>) with per-canvas
  broadcast channels for real-time updates
- src/gateway/canvas.rs: REST endpoints (GET/POST/DELETE /api/canvas/:id,
  GET /api/canvas/:id/history, GET /api/canvas) and WebSocket endpoint
  (WS /ws/canvas/:id) for real-time frame delivery

Frontend:
- web/src/pages/Canvas.tsx: Canvas viewer page with WebSocket connection,
  iframe sandbox rendering, canvas switcher, frame history panel

Registration:
- CanvasTool registered in all_tools_with_runtime (always available)
- Canvas routes wired into gateway router
- CanvasStore added to AppState
- Canvas page added to App.tsx router and Sidebar navigation
- i18n keys added for en/zh/tr locales

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(config): fix pre-existing test compilation errors in schema.rs

- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gateway): share CanvasStore between tool and REST API

The CanvasTool and gateway AppState each created their own CanvasStore,
so content rendered via the tool never appeared in the REST API.

Create the CanvasStore once in the gateway, pass it to
all_tools_with_runtime via a new optional parameter, and reuse the
same instance in AppState.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(gateway): address critical security and reliability bugs in Live Canvas

- Validate content_type in REST POST endpoint against allowed set,
  preventing injection of "eval" frames via the REST API
- Enforce MAX_CONTENT_SIZE (256KB) limit on REST POST endpoint,
  matching tool-side validation to prevent memory exhaustion
- Add MAX_CANVAS_COUNT (100) limit to prevent unbounded canvas creation
  and memory exhaustion from CanvasStore
- Handle broadcast RecvError::Lagged in WebSocket handler gracefully
  instead of disconnecting the client
- Make MAX_CONTENT_SIZE and ALLOWED_CONTENT_TYPES pub for gateway reuse
- Update CanvasStore::render and subscribe to return Option for
  canvas count enforcement

---------

Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: rareba <rareba@users.noreply.github.com>
2026-03-21 20:59:18 -04:00
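The REST-side validation described above (content-type allowlist, size limit, canvas count cap) can be sketched like this — a hedged outline of the checks, with the constant values taken from the commit message and the function shape assumed:

```rust
// Limits as stated in the commit description.
const MAX_CONTENT_SIZE: usize = 256 * 1024; // 256KB per frame
const MAX_CANVAS_COUNT: usize = 100; // cap on total canvases
// Allowed set is illustrative; the commit names html/svg/markdown/text
// as renderable types and excludes "eval" from the REST path.
const ALLOWED_CONTENT_TYPES: &[&str] = &["html", "svg", "markdown", "text"];

fn validate_frame(
    content_type: &str,
    content: &str,
    canvas_count: usize,
) -> Result<(), String> {
    if !ALLOWED_CONTENT_TYPES.contains(&content_type) {
        return Err(format!("content_type '{content_type}' not allowed"));
    }
    if content.len() > MAX_CONTENT_SIZE {
        return Err("content exceeds 256KB limit".into());
    }
    if canvas_count >= MAX_CANVAS_COUNT {
        return Err("canvas limit reached".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_frame("html", "<p>hi</p>", 0).is_ok());
    // "eval" frames must not be injectable through the REST API.
    assert!(validate_frame("eval", "1+1", 0).is_err());
    assert!(validate_frame("html", "x", MAX_CANVAS_COUNT).is_err());
}
```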
Argenis 86ad0c6a2b fix(channels): address critical bugs in voice wake word detection (#4191)
* feat(channels): add voice wake word detection channel

Add VoiceWakeChannel behind the `voice-wake` feature flag that:
- Captures audio from the default microphone via cpal
- Uses energy-based VAD to detect speech activity
- Transcribes speech via the existing transcription API (Whisper)
- Checks for a configurable wake word in the transcription
- On detection, captures the following utterance and dispatches it
  as a ChannelMessage

State machine: Listening -> Triggered -> Capturing -> Processing -> Listening

Config keys (under [channels_config.voice_wake]):
- wake_word (default: "hey zeroclaw")
- silence_timeout_ms (default: 2000)
- energy_threshold (default: 0.01)
- max_capture_secs (default: 30)

Includes tests for config parsing, state machine, RMS energy
computation, and WAV encoding.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(config): fix pre-existing test compilation errors in schema.rs

- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(channels): exclude voice-wake from all-features CI check

Add a `ci-all` meta-feature in Cargo.toml that includes every feature
except `voice-wake`, which requires `libasound2-dev` (ALSA) not present
on CI runners. Update the check-all-features CI job to use
`--features ci-all` instead of `--all-features`.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(channels): address critical bugs in voice wake word detection

- Replace std::mem::forget(stream) with dedicated thread that holds the
  cpal stream and shuts down cleanly via oneshot channel, preventing
  microphone resource leaks on task cancellation
- Add config validation: energy_threshold must be positive+finite,
  silence_timeout_ms >= 100ms, max_capture_secs clamped to 300
- Guard WAV encoding against u32 overflow for large audio buffers
- Add hard cap on capture_buf size to prevent unbounded memory growth
- Increase audio channel buffer from 4 to 64 slots to reduce chunk
  drops during transcription API calls
- Remove dead WakeState::Processing variant that was never entered

---------

Co-authored-by: Giulio V <vannini.gv@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 20:43:19 -04:00
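The energy-based VAD mentioned above rests on an RMS computation over the sample buffer; a minimal sketch (not the channel's actual code — the function names are illustrative):

```rust
/// Root-mean-square energy of a mono f32 sample buffer — the quantity
/// an energy-based VAD compares against `energy_threshold`.
fn rms_energy(samples: &[f32]) -> f32 {
    if samples.is_empty() {
        return 0.0;
    }
    let sum_sq: f32 = samples.iter().map(|s| s * s).sum();
    (sum_sq / samples.len() as f32).sqrt()
}

/// Speech is assumed present when RMS energy meets the threshold
/// (default 0.01 per the config keys listed above).
fn is_speech(samples: &[f32], energy_threshold: f32) -> bool {
    rms_energy(samples) >= energy_threshold
}

fn main() {
    assert_eq!(rms_energy(&[0.0; 4]), 0.0); // silence
    assert!((rms_energy(&[1.0, -1.0, 1.0, -1.0]) - 1.0).abs() < 1e-6);
    assert!(is_speech(&[0.5; 8], 0.01));
    assert!(!is_speech(&[0.0; 8], 0.01));
}
```

This also shows why the commit validates that `energy_threshold` is positive and finite: a zero, negative, or NaN threshold makes the comparison degenerate.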
Argenis 6ecf89d6a9 fix(ci): skip release and publish workflows on forks (#4190)
When a fork syncs with upstream, GitHub attributes the push to the fork
owner, causing release-beta-on-push and publish-crates-auto to run
under the wrong identity — leading to confusing notifications and
guaranteed failures (missing secrets).

Add repository guards to root jobs so the entire pipeline is skipped
on forks.
2026-03-21 20:42:55 -04:00
argenis de la rosa 691efa4d8c style: fix cargo fmt formatting in reaction tool 2026-03-21 20:38:24 -04:00
argenis de la rosa d1e3f435b4 style: fix cargo fmt formatting in session tools 2026-03-21 20:38:08 -04:00
Argenis 44c3e264ad Merge pull request #4192 from zeroclaw-labs/fix/image-gen-tool
fix(tools): harden image_gen security and model validation
2026-03-21 20:37:27 -04:00
argenis de la rosa f2b6013329 fix(tools): harden image_gen security enforcement and model validation
- Replace manual can_act()/record_action() with enforce_tool_operation()
  to match the codebase convention used by all other tools (notion,
  memory_forget, claude_code, delegate, etc.), producing consistent
  error messages and avoiding logic duplication.

- Add model parameter validation to prevent URL path traversal attacks
  via crafted model identifiers (e.g. "../../evil-endpoint").

- Add tests for model traversal rejection and filename sanitization.
2026-03-21 20:08:51 -04:00
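The model-identifier validation described above can be sketched as follows. The exact allowed charset and length cap are assumptions; the key property from the commit is that `".."` segments (e.g. `"../../evil-endpoint"`) are refused before the model id reaches a URL path:

```rust
/// Reject model identifiers that could traverse the API URL path.
/// Charset and length limit are illustrative, not the real rules.
fn validate_model_id(model: &str) -> Result<(), String> {
    if model.is_empty() || model.len() > 128 {
        return Err("model id empty or too long".into());
    }
    // Model ids may contain '/' segments (e.g. vendor/model/variant),
    // but no segment may be empty, ".", or "..".
    if model
        .split('/')
        .any(|seg| seg.is_empty() || seg == "." || seg == "..")
    {
        return Err("model id contains path traversal".into());
    }
    if !model
        .chars()
        .all(|c| c.is_ascii_alphanumeric() || matches!(c, '/' | '-' | '_' | '.'))
    {
        return Err("model id contains disallowed characters".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_model_id("fal-ai/flux/dev").is_ok());
    assert!(validate_model_id("../../evil-endpoint").is_err());
    assert!(validate_model_id("model?x=1").is_err());
    assert!(validate_model_id("").is_err());
}
```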
argenis de la rosa 05d3c51a30 fix(security): add security policy enforcement and input validation to session tools
SessionsSendTool was missing security gate enforcement entirely: any agent
could send messages to any session without security policy checks. Similarly,
SessionsHistoryTool had no security enforcement for reading session data.

Changes:
- Add SecurityPolicy field to SessionsHistoryTool (enforces ToolOperation::Read)
- Add SecurityPolicy field to SessionsSendTool (enforces ToolOperation::Act)
- Add session_id validation to reject empty or non-alphanumeric-only IDs
- Pass security policy from all_tools_with_runtime registration
- Add tests for empty session_id, non-alphanumeric session_id validation
2026-03-21 20:04:44 -04:00
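The session_id validation can be sketched as below. The exact accepted charset is an assumption (alphanumeric plus `-` and `_`, a common shape for session identifiers); the commit only states that empty and non-alphanumeric-only IDs are rejected:

```rust
/// Reject empty or malformed session IDs before they reach the
/// session store. Charset is an assumed rule, not the real one.
fn validate_session_id(id: &str) -> Result<(), String> {
    if id.trim().is_empty() {
        return Err("session_id must not be empty".into());
    }
    if !id
        .chars()
        .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_')
    {
        return Err(format!("invalid session_id: {id}"));
    }
    Ok(())
}

fn main() {
    assert!(validate_session_id("sess-2024_01").is_ok());
    assert!(validate_session_id("").is_err());
    assert!(validate_session_id("   ").is_err());
    // Path-like IDs must not reach a JSONL store on disk.
    assert!(validate_session_id("../other-agent").is_err());
}
```

Validating the ID matters doubly here because the default `SessionStore` is JSONL-backed: an unchecked ID is effectively a filename.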
argenis de la rosa 2ceda31ce2 fix(tools): pass platform channel_id to reaction trait instead of channel name
The reaction tool was passing the channel adapter name (e.g. "discord",
"slack") as the first argument to Channel::add_reaction() and
Channel::remove_reaction(), but the trait signature expects a
platform-specific channel_id (e.g. Discord channel snowflake, Slack
channel ID like "C0123ABCD"). This would cause all reaction API calls
to fail at the platform level.

Fixes:
- Add required "channel_id" parameter to the tool schema
- Extract and pass channel_id (not channel_name) to trait methods
- Update tool description to mention the new parameter
- Add MockChannel channel_id capture for test verification
- Add test asserting channel_id (not name) reaches the trait
- Update all existing tests to supply channel_id
2026-03-21 20:01:22 -04:00
Argenis 9069bc3c1f fix(agent): add system prompt budgeting for small-context models (#4185)
For models with small context windows (e.g. glm-4.5-air ~8K tokens),
the system prompt alone can exceed the limit. This adds:

- max_system_prompt_chars config option (default 0 = unlimited)
- compact_context now also compacts the system prompt: skips the
  Channel Capabilities section and shows only tool names
- Truncation with marker when prompt exceeds the budget

Users can set `max_system_prompt_chars = 8000` in [agent] config
to cap the system prompt for small-context models.

Closes #4124
2026-03-21 19:40:21 -04:00
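The truncation-with-marker behaviour can be sketched as a small function (the marker text and function name are illustrative; the `0 = unlimited` default matches `max_system_prompt_chars` as described):

```rust
/// Truncate a system prompt to a character budget, appending a marker
/// so the model (and logs) can see that content was cut. A budget of
/// 0 means unlimited.
fn truncate_prompt(prompt: &str, max_chars: usize) -> String {
    if max_chars == 0 || prompt.chars().count() <= max_chars {
        return prompt.to_string();
    }
    const MARKER: &str = "\n[... system prompt truncated ...]";
    // Iterate by chars, not bytes, so multi-byte UTF-8 is never split.
    let kept: String = prompt.chars().take(max_chars).collect();
    format!("{kept}{MARKER}")
}

fn main() {
    assert_eq!(truncate_prompt("hello", 0), "hello"); // unlimited
    assert_eq!(truncate_prompt("hello", 10), "hello"); // under budget
    let out = truncate_prompt("abcdefgh", 3);
    assert!(out.starts_with("abc"));
    assert!(out.contains("truncated"));
}
```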
Argenis 9319fe18da fix(approval): support wildcard * in auto_approve and always_ask (#4184)
auto_approve = ["*"] was doing exact string matching, so only the
literal tool name "*" was matched. Users expecting wildcard semantics
had every tool blocked in supervised mode.

Also adds "prompt exceeds max length" to the context-window error
detection hints (fixes GLM/ZAI error 1261 detection).

Closes #4127
2026-03-21 19:38:11 -04:00
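The wildcard fix amounts to extending exact string matching with a `"*"` case; a minimal sketch (illustrative function name, not the approval module's actual code):

```rust
/// Match a tool name against an auto_approve / always_ask list.
/// "*" matches every tool; anything else is an exact name match —
/// the behaviour this commit adds over plain equality.
fn matches_tool(patterns: &[&str], tool_name: &str) -> bool {
    patterns.iter().any(|p| *p == "*" || *p == tool_name)
}

fn main() {
    // Before the fix, ["*"] only matched a tool literally named "*".
    assert!(matches_tool(&["*"], "shell"));
    assert!(matches_tool(&["memory_store"], "memory_store"));
    assert!(!matches_tool(&["memory_store"], "shell"));
    assert!(!matches_tool(&[], "shell"));
}
```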
Argenis cc454a86c8 fix(install): remove pairing code display from installer (#4176)
The gateway pairing code is now shown in the dashboard, so displaying
it in the installer output is redundant and clutters the output (three
codes were shown).
2026-03-21 19:06:37 -04:00
Argenis 256e8ccebf chore: bump version to v0.5.6 (#4174)
Update version across all distribution manifests:
- Cargo.toml / Cargo.lock
- dist/aur/PKGBUILD + .SRCINFO
- dist/scoop/zeroclaw.json
2026-03-21 18:03:38 -04:00
Argenis 72c9e6b6ca fix(publish): publish aardvark-sys dep before main crate (#4172)
* fix(publish): add aardvark-sys version and publish it before main crate

- Add version = "0.1.0" to aardvark-sys path dependency in Cargo.toml
- Update all three publish workflows to publish aardvark-sys first
- Add aardvark-sys COPY to Dockerfile for workspace builds
- Fixes cargo publish failure: "dependency aardvark-sys does not
  specify a version"

* ci: publish aardvark-sys before main crate in all publish workflows

All three crates.io publish workflows now publish aardvark-sys first,
wait for indexing, then publish the main zeroclawlabs crate.
2026-03-21 16:20:50 -04:00
Argenis 755a129ca2 fix(install): use /dev/tty for sudo in curl|bash Xcode license accept (#4169)
When run via `curl | bash`, stdin is the curl pipe, so sudo cannot
prompt for a password. Redirect sudo's stdin from /dev/tty to reach
the real terminal, allowing the password prompt to work in piped
invocations.
2026-03-21 14:15:21 -04:00
Argenis 8b0d3684c5 fix(install): auto-accept Xcode license instead of bailing out (#4165)
Instead of exiting with a manual remediation step, the installer now
attempts to accept the Xcode/CLT license automatically via
`sudo xcodebuild -license accept`. Falls back to a clear error message
only if sudo fails (e.g. no terminal or password).
2026-03-21 13:57:38 -04:00
Giulio V cdb5ac1471 fix(tools): fix remove_reaction_success test
The output format used "{action}ed" which produced "removeed" for the
remove action. Use explicit past-tense mapping instead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 18:49:35 +01:00
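The explicit past-tense mapping replacing the naive `"{action}ed"` format string can be sketched as (fallback branch is an assumption; the commit only covers add/remove):

```rust
/// Map a reaction action to its past tense for user-facing output.
/// Naive "{action}ed" formatting produced "removeed" for "remove".
fn past_tense(action: &str) -> String {
    match action {
        "add" => "added".to_string(),
        "remove" => "removed".to_string(),
        other => format!("{other}ed"), // fallback for unknown actions
    }
}

fn main() {
    assert_eq!(past_tense("add"), "added");
    assert_eq!(past_tense("remove"), "removed"); // not "removeed"
}
```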
Giulio V 67acb1a0bb fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 18:10:05 +01:00
Giulio V 9eac6bafef fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 18:09:48 +01:00
Giulio V a12f2ff439 fix(config): fix pre-existing test compilation errors in schema.rs
- Remove #[cfg(unix)] gate on `use tempfile::TempDir` import since
  TempDir is used unconditionally in bootstrap file tests
- Add explicit type annotations on tokio::fs::* calls to resolve
  type inference failures (create_dir_all, write, read_to_string)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 18:09:36 +01:00
Argenis a38a4d132e fix(hardware): drain stdin in subprocess test to prevent broken pipe flake (#4161)
* fix(hardware): drain stdin in subprocess test to prevent broken pipe flake

The test script did not consume stdin, so SubprocessTool's stdin write
raced against the process exit, causing intermittent EPIPE failures.
Add `cat > /dev/null` to drain stdin before producing output.

* style: format subprocess test
2026-03-21 12:19:53 -04:00
Argenis 48aba73d3a fix(install): always check Xcode license on macOS, not just with --install-system-deps (#4153)
The Xcode license test-compile was inside install_system_deps(), which
only runs when --install-system-deps is passed. On macOS the default
path skipped this entirely, so users hit `cc` exit code 69 deep in
cargo build. Move the check into the unconditional main flow so it
always fires on Darwin.
2026-03-21 11:29:36 -04:00
Argenis a1ab1e1a11 fix(install): use test-compile instead of xcrun for Xcode license detection (#4151)
xcrun --show-sdk-path can succeed even when the Xcode/CLT license has
not been accepted, so the previous check was ineffective. Replace it
with an actual test-compilation of a trivial C file, which reliably
triggers the exit-code-69 failure when the license is pending.
2026-03-21 11:03:07 -04:00
Giulio V f394abf35c feat(tools): add standalone image generation tool via fal.ai
Add ImageGenTool that exposes fal.ai Flux model image generation as a
standalone tool, decoupled from the LinkedIn client. The tool accepts a
text prompt, optional filename/size/model parameters, calls the fal.ai
synchronous API, downloads the result, and saves to workspace/images/.

- New src/tools/image_gen.rs with full Tool trait implementation
- New ImageGenConfig in schema.rs (enabled, default_model, api_key_env)
- Config-gated registration in all_tools_with_runtime
- Security: checks can_act() and record_action() before execution
- Comprehensive unit tests (prompt validation, API key, size enum,
  autonomy blocking, tool spec)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 15:17:28 +01:00
Giulio V 52e0271bd5 feat(tools): add emoji reaction tool for cross-channel reactions
Add ReactionTool that exposes Channel::add_reaction and
Channel::remove_reaction as an agent-callable tool. Uses a
late-binding ChannelMapHandle (Arc<RwLock<HashMap>>) pattern
so the tool can be constructed during tool registry init and
populated once channels are available in start_channels.

Parameters: channel, message_id, emoji, action (add/remove).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 15:15:25 +01:00
Giulio V 6c0a48efff feat(tools): add session list, history, and send tools for inter-agent messaging
Add three new tools in src/tools/sessions.rs:
- sessions_list: lists active sessions with channel, message count, last activity
- sessions_history: reads last N messages from a session by ID
- sessions_send: appends a message to a session for inter-agent communication

All tools operate on the SessionBackend trait, using the JSONL SessionStore
by default. Registered unconditionally in all_tools_with_runtime when the
sessions directory is accessible.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-21 15:07:18 +01:00
47 changed files with 6800 additions and 246 deletions
+1 -1
@@ -154,7 +154,7 @@ jobs:
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Check all features
- run: cargo check --all-features --locked
+ run: cargo check --features ci-all --locked
docs-quality:
name: Docs Quality
+17
@@ -19,6 +19,7 @@ env:
jobs:
detect-version-change:
name: Detect Version Bump
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
outputs:
changed: ${{ steps.check.outputs.changed }}
@@ -102,6 +103,22 @@ jobs:
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
run: sleep 15
- name: Publish to crates.io
shell: bash
env:
+18
@@ -67,6 +67,24 @@ jobs:
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
if: "!inputs.dry_run"
shell: bash
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
if: "!inputs.dry_run"
run: sleep 15
- name: Publish (dry run)
if: inputs.dry_run
run: cargo publish --dry-run --locked --allow-dirty --no-verify
@@ -21,6 +21,7 @@ env:
jobs:
version:
name: Resolve Version
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
outputs:
version: ${{ steps.ver.outputs.version }}
@@ -40,6 +41,7 @@ jobs:
release-notes:
name: Generate Release Notes
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
outputs:
notes: ${{ steps.notes.outputs.body }}
@@ -130,6 +132,7 @@ jobs:
web:
name: Build Web Dashboard
if: github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
@@ -323,6 +323,21 @@ jobs:
- name: Clean web build artifacts
run: rm -rf web/node_modules web/src web/package.json web/package-lock.json web/tsconfig*.json web/vite.config.ts web/index.html
- name: Publish aardvark-sys to crates.io
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
OUTPUT=$(cargo publish --locked --allow-dirty --no-verify -p aardvark-sys 2>&1) && exit 0
echo "$OUTPUT"
if echo "$OUTPUT" | grep -q 'already exists'; then
echo "::notice::aardvark-sys already on crates.io — skipping"
exit 0
fi
exit 1
- name: Wait for aardvark-sys to index
run: sleep 15
- name: Publish to crates.io
env:
CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
+331 -3
@@ -117,6 +117,28 @@ version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "683d7910e743518b0e34f1186f92494becacb047c7b6bf616c96772180fef923"
[[package]]
name = "alsa"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed7572b7ba83a31e20d1b48970ee402d2e3e0537dcfe0a3ff4d6eb7508617d43"
dependencies = [
"alsa-sys",
"bitflags 2.11.0",
"cfg-if",
"libc",
]
[[package]]
name = "alsa-sys"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db8fee663d06c4e303404ef5f40488a53e062f89ba8bfed81f42325aafad1527"
dependencies = [
"libc",
"pkg-config",
]
[[package]]
name = "ambient-authority"
version = "0.0.2"
@@ -583,6 +605,24 @@ dependencies = [
"virtue",
]
[[package]]
name = "bindgen"
version = "0.72.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "993776b509cfb49c750f11b8f07a46fa23e0a1386ffc01fb1e7d343efc387895"
dependencies = [
"bitflags 2.11.0",
"cexpr",
"clang-sys",
"itertools 0.13.0",
"proc-macro2",
"quote",
"regex",
"rustc-hash",
"shlex",
"syn 2.0.117",
]
[[package]]
name = "bip39"
version = "2.2.2"
@@ -878,6 +918,21 @@ dependencies = [
"shlex",
]
[[package]]
name = "cesu8"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d43a04d8753f35258c91f8ec639f792891f748a1edbd759cf1dcea3382ad83c"
[[package]]
name = "cexpr"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6fac387a98bb7c37292057cffc56d62ecb629900026402633ae9160df93a8766"
dependencies = [
"nom 7.1.3",
]
[[package]]
name = "cff-parser"
version = "0.1.0"
@@ -1003,6 +1058,17 @@ dependencies = [
"zeroize",
]
[[package]]
name = "clang-sys"
version = "1.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b023947811758c97c59bf9d1c188fd619ad4718dcaa767947df1cadb14f39f4"
dependencies = [
"glob",
"libc",
"libloading",
]
[[package]]
name = "clap"
version = "4.6.0"
@@ -1086,6 +1152,16 @@ version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d07550c9036bf2ae0c684c4297d503f838287c83c53686d05370d0e139ae570"
[[package]]
name = "combine"
version = "4.6.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba5a308b75df32fe02788e748662718f03fde005016435c444eea572398219fd"
dependencies = [
"bytes",
"memchr",
]
[[package]]
name = "compression-codecs"
version = "0.4.37"
@@ -1209,6 +1285,49 @@ dependencies = [
"libm",
]
[[package]]
name = "coreaudio-rs"
version = "0.11.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "321077172d79c662f64f5071a03120748d5bb652f5231570141be24cfcd2bace"
dependencies = [
"bitflags 1.3.2",
"core-foundation-sys",
"coreaudio-sys",
]
[[package]]
name = "coreaudio-sys"
version = "0.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ceec7a6067e62d6f931a2baf6f3a751f4a892595bcec1461a3c94ef9949864b6"
dependencies = [
"bindgen",
]
[[package]]
name = "cpal"
version = "0.15.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "873dab07c8f743075e57f524c583985fbaf745602acbe916a01539364369a779"
dependencies = [
"alsa",
"core-foundation-sys",
"coreaudio-rs",
"dasp_sample",
"jni",
"js-sys",
"libc",
"mach2 0.4.3",
"ndk",
"ndk-context",
"oboe",
"wasm-bindgen",
"wasm-bindgen-futures",
"web-sys",
"windows",
]
[[package]]
name = "cpp_demangle"
version = "0.4.5"
@@ -1588,6 +1707,12 @@ dependencies = [
"parking_lot_core",
]
[[package]]
name = "dasp_sample"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c87e182de0887fd5361989c677c4e8f5000cd9491d6d563161a8f3a5519fc7f"
[[package]]
name = "data-encoding"
version = "2.10.0"
@@ -2944,7 +3069,7 @@ dependencies = [
"js-sys",
"log",
"wasm-bindgen",
"windows-core",
"windows-core 0.62.2",
]
[[package]]
@@ -3395,6 +3520,28 @@ dependencies = [
"serde",
]
[[package]]
name = "jni"
version = "0.21.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a87aa2bb7d2af34197c04845522473242e1aa17c12f4935d5856491a7fb8c97"
dependencies = [
"cesu8",
"cfg-if",
"combine",
"jni-sys",
"log",
"thiserror 1.0.69",
"walkdir",
"windows-sys 0.45.0",
]
[[package]]
name = "jni-sys"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8eaf4bc02d17cbdd7ff4c7438cafcdf7fb9a4613313ad11b4f8fefe7d3fa0130"
[[package]]
name = "jobserver"
version = "0.1.34"
@@ -4242,6 +4389,35 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "11ec1bc47d34ae756616f387c11fd0595f86f2cc7e6473bde9e3ded30cb902a1"
[[package]]
name = "ndk"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2076a31b7010b17a38c01907c45b945e8f11495ee4dd588309718901b1f7a5b7"
dependencies = [
"bitflags 2.11.0",
"jni-sys",
"log",
"ndk-sys",
"num_enum",
"thiserror 1.0.69",
]
[[package]]
name = "ndk-context"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "27b02d87554356db9e9a873add8782d4ea6e3e58ea071a9adb9a2e8ddb884a8b"
[[package]]
name = "ndk-sys"
version = "0.5.0+25.2.9519653"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c196769dd60fd4f363e11d948139556a344e79d451aeb2fa2fd040738ef7691"
dependencies = [
"jni-sys",
]
[[package]]
name = "negentropy"
version = "0.5.0"
@@ -4421,6 +4597,17 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf97ec579c3c42f953ef76dbf8d55ac91fb219dde70e49aa4a6b7d74e9919050"
[[package]]
name = "num-derive"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ed3955f1a9c7c0c15e092f9c887db08b1fc683305fdf6eb6684f22555355e202"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.117",
]
[[package]]
name = "num-traits"
version = "0.2.19"
@@ -4440,6 +4627,28 @@ dependencies = [
"libc",
]
[[package]]
name = "num_enum"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d0bca838442ec211fa11de3a8b0e0e8f3a4522575b5c4c06ed722e005036f26"
dependencies = [
"num_enum_derive",
"rustversion",
]
[[package]]
name = "num_enum_derive"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "680998035259dcfcafe653688bf2aa6d3e2dc05e98be6ab46afb089dc84f1df8"
dependencies = [
"proc-macro-crate",
"proc-macro2",
"quote",
"syn 2.0.117",
]
[[package]]
name = "nusb"
version = "0.2.3"
@@ -4519,6 +4728,29 @@ dependencies = [
"ruzstd",
]
[[package]]
name = "oboe"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8b61bebd49e5d43f5f8cc7ee2891c16e0f41ec7954d36bcb6c14c5e0de867fb"
dependencies = [
"jni",
"ndk",
"ndk-context",
"num-derive",
"num-traits",
"oboe-sys",
]
[[package]]
name = "oboe-sys"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c8bb09a4a2b1d668170cfe0a7d5bc103f8999fb316c98099b6a9939c9f2e79d"
dependencies = [
"cc",
]
[[package]]
name = "once_cell"
version = "1.21.4"
@@ -8691,6 +8923,26 @@ dependencies = [
"wasmtime-internal-math",
]
[[package]]
name = "windows"
version = "0.54.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9252e5725dbed82865af151df558e754e4a3c2c30818359eb17465f1346a1b49"
dependencies = [
"windows-core 0.54.0",
"windows-targets 0.52.6",
]
[[package]]
name = "windows-core"
version = "0.54.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "12661b9c89351d684a50a8a643ce5f608e20243b9fb84687800163429f161d65"
dependencies = [
"windows-result 0.1.2",
"windows-targets 0.52.6",
]
[[package]]
name = "windows-core"
version = "0.62.2"
@@ -8700,7 +8952,7 @@ dependencies = [
"windows-implement",
"windows-interface",
"windows-link",
"windows-result",
"windows-result 0.4.1",
"windows-strings",
]
@@ -8732,6 +8984,15 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
[[package]]
name = "windows-result"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e383302e8ec8515204254685643de10811af0ed97ea37210dc26fb0032647f8"
dependencies = [
"windows-targets 0.52.6",
]
[[package]]
name = "windows-result"
version = "0.4.1"
@@ -8750,6 +9011,15 @@ dependencies = [
"windows-link",
]
[[package]]
name = "windows-sys"
version = "0.45.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75283be5efb2831d37ea142365f009c02ec203cd29a3ebecbc093d52315b66d0"
dependencies = [
"windows-targets 0.42.2",
]
[[package]]
name = "windows-sys"
version = "0.52.0"
@@ -8786,6 +9056,21 @@ dependencies = [
"windows-link",
]
[[package]]
name = "windows-targets"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e5180c00cd44c9b1c88adb3693291f1cd93605ded80c250a75d472756b4d071"
dependencies = [
"windows_aarch64_gnullvm 0.42.2",
"windows_aarch64_msvc 0.42.2",
"windows_i686_gnu 0.42.2",
"windows_i686_msvc 0.42.2",
"windows_x86_64_gnu 0.42.2",
"windows_x86_64_gnullvm 0.42.2",
"windows_x86_64_msvc 0.42.2",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
@@ -8819,6 +9104,12 @@ dependencies = [
"windows_x86_64_msvc 0.53.1",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "597a5118570b68bc08d8d59125332c54f1ba9d9adeedeef5b99b02ba2b0698f8"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
@@ -8831,6 +9122,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a9d8416fa8b42f5c947f8482c43e7d89e73a173cead56d044f6a56104a6d1b53"
[[package]]
name = "windows_aarch64_msvc"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e08e8864a60f06ef0d0ff4ba04124db8b0fb3be5776a5cd47641e942e58c4d43"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
@@ -8843,6 +9140,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9d782e804c2f632e395708e99a94275910eb9100b2114651e04744e9b125006"
[[package]]
name = "windows_i686_gnu"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c61d927d8da41da96a81f029489353e68739737d3beca43145c8afec9a31a84f"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
@@ -8867,6 +9170,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa7359d10048f68ab8b09fa71c3daccfb0e9b559aed648a8f95469c27057180c"
[[package]]
name = "windows_i686_msvc"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44d840b6ec649f480a41c8d80f9c65108b92d89345dd94027bfe06ac444d1060"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
@@ -8879,6 +9188,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e7ac75179f18232fe9c285163565a57ef8d3c89254a30685b57d83a38d326c2"
[[package]]
name = "windows_x86_64_gnu"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8de912b8b8feb55c064867cf047dda097f92d51efad5b491dfb98f6bbb70cb36"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
@@ -8891,6 +9206,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9c3842cdd74a865a8066ab39c8a7a473c0778a3f29370b5fd6b4b9aa7df4a499"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26d41b46a36d453748aedef1486d5c7a85db22e56aff34643984ea85514e94a3"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
@@ -8903,6 +9224,12 @@ version = "0.53.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ffa179e2d07eee8ad8f57493436566c7cc30ac536a3379fdf008f47f6bb7ae1"
[[package]]
name = "windows_x86_64_msvc"
version = "0.42.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9aec5da331524158c6d1a4ac0ab1541149c0b9505fde06423b02f5ef0106b9f0"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
@@ -9203,7 +9530,7 @@ dependencies = [
[[package]]
name = "zeroclawlabs"
version = "0.5.5"
version = "0.5.6"
dependencies = [
"aardvark-sys",
"anyhow",
@@ -9217,6 +9544,7 @@ dependencies = [
"clap",
"clap_complete",
"console",
"cpal",
"criterion",
"cron",
"dialoguer",
+28 -2
@@ -4,7 +4,7 @@ resolver = "2"
[package]
name = "zeroclawlabs"
version = "0.5.5"
version = "0.5.6"
edition = "2021"
authors = ["theonlyhennygod"]
license = "MIT OR Apache-2.0"
@@ -97,7 +97,7 @@ anyhow = "1.0"
thiserror = "2.0"
# Aardvark I2C/SPI/GPIO USB adapter (Total Phase) — stub when SDK absent
aardvark-sys = { path = "crates/aardvark-sys" }
aardvark-sys = { path = "crates/aardvark-sys", version = "0.1.0" }
# UUID generation
uuid = { version = "1.22", default-features = false, features = ["v4", "std"] }
@@ -199,6 +199,9 @@ pdf-extract = { version = "0.10", optional = true }
# WASM plugin runtime (extism)
extism = { version = "1.20", optional = true }
# Cross-platform audio capture for voice wake word detection (optional, enable with --features voice-wake)
cpal = { version = "0.15", optional = true }
# Terminal QR rendering for WhatsApp Web pairing flow.
qrcode = { version = "0.14", optional = true }
@@ -252,8 +255,31 @@ rag-pdf = ["dep:pdf-extract"]
skill-creation = []
# whatsapp-web = Native WhatsApp Web client with custom rusqlite storage backend
whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "dep:serde-big-array", "dep:prost", "dep:qrcode"]
# voice-wake = Voice wake word detection via microphone (cpal)
voice-wake = ["dep:cpal"]
# WASM plugin system (extism-based)
plugins-wasm = ["dep:extism"]
# Meta-feature for CI: all features except those requiring system C libraries
# not available on standard CI runners (e.g., voice-wake needs libasound2-dev).
ci-all = [
"channel-nostr",
"hardware",
"channel-matrix",
"channel-lark",
"memory-postgres",
"memory-mem0",
"observability-prometheus",
"observability-otel",
"peripheral-rpi",
"browser-native",
"sandbox-landlock",
"sandbox-bubblewrap",
"probe",
"rag-pdf",
"skill-creation",
"whatsapp-web",
"plugins-wasm",
]
[profile.release]
opt-level = "z" # Optimize for size
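The new `voice-wake` feature is deliberately left out of the `ci-all` meta-feature because cpal needs system ALSA headers (libasound2-dev) that standard CI runners lack. A minimal sketch of how such a gate behaves at the code level — the module and names here are illustrative, not taken from the crate:

```rust
// Sketch: gating cpal-dependent code behind the optional feature.
// With the feature disabled (as in ci-all builds), the module is
// compiled out entirely and the optional dependency is never needed.
#[cfg(feature = "voice-wake")]
mod voice_wake {
    // cpal-based microphone capture would live here.
}

fn main() {
    // cfg! evaluates at compile time; false when the feature is off.
    let enabled = cfg!(feature = "voice-wake");
    println!("voice-wake enabled: {enabled}");
}
```

This is why `voice-wake = ["dep:cpal"]` uses the `dep:` syntax: the feature pulls in the optional dependency only when explicitly requested.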
+1
@@ -27,6 +27,7 @@ COPY Cargo.toml Cargo.lock ./
# Previously we used sed to drop `crates/robot-kit`, which made the manifest disagree
# with the lockfile and caused `cargo --locked` to fail (Cargo refused to rewrite the lock).
COPY crates/robot-kit/ crates/robot-kit/
COPY crates/aardvark-sys/ crates/aardvark-sys/
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches \
&& echo "fn main() {}" > src/main.rs \
+2 -2
@@ -1,6 +1,6 @@
pkgbase = zeroclaw
pkgdesc = Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant.
pkgver = 0.5.5
pkgver = 0.5.6
pkgrel = 1
url = https://github.com/zeroclaw-labs/zeroclaw
arch = x86_64
@@ -10,7 +10,7 @@ pkgbase = zeroclaw
makedepends = git
depends = gcc-libs
depends = openssl
source = zeroclaw-0.5.5.tar.gz::https://github.com/zeroclaw-labs/zeroclaw/archive/refs/tags/v0.5.5.tar.gz
source = zeroclaw-0.5.6.tar.gz::https://github.com/zeroclaw-labs/zeroclaw/archive/refs/tags/v0.5.6.tar.gz
sha256sums = SKIP
pkgname = zeroclaw
+1 -1
@@ -1,6 +1,6 @@
# Maintainer: zeroclaw-labs <bot@zeroclaw.dev>
pkgname=zeroclaw
pkgver=0.5.5
pkgver=0.5.6
pkgrel=1
pkgdesc="Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant."
arch=('x86_64')
+2 -2
@@ -1,11 +1,11 @@
{
"version": "0.5.5",
"version": "0.5.6",
"description": "Zero overhead. Zero compromise. 100% Rust. The fastest, smallest AI assistant.",
"homepage": "https://github.com/zeroclaw-labs/zeroclaw",
"license": "MIT|Apache-2.0",
"architecture": {
"64bit": {
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v0.5.5/zeroclaw-x86_64-pc-windows-msvc.zip",
"url": "https://github.com/zeroclaw-labs/zeroclaw/releases/download/v0.5.6/zeroclaw-x86_64-pc-windows-msvc.zip",
"hash": "",
"bin": "zeroclaw.exe"
}
+60 -25
@@ -569,11 +569,29 @@ MSG
exit 0
fi
# Detect un-accepted Xcode/CLT license (causes `cc` to exit 69).
if ! /usr/bin/xcrun --show-sdk-path >/dev/null 2>&1; then
warn "Xcode license has not been accepted. Run:"
warn " sudo xcodebuild -license accept"
warn "then re-run this installer."
exit 1
# xcrun --show-sdk-path can succeed even without an accepted license,
# so we test-compile a trivial C file which reliably triggers the error.
_xcode_test_file="$(mktemp /tmp/zeroclaw-xcode-check.XXXXXX.c)"
printf 'int main(){return 0;}\n' > "$_xcode_test_file"
if ! cc -x c "$_xcode_test_file" -o /dev/null 2>/dev/null; then
rm -f "$_xcode_test_file"
warn "Xcode/CLT license has not been accepted. Attempting to accept it now..."
_xcode_accept_ok=false
if [[ "$(id -u)" -eq 0 ]]; then
xcodebuild -license accept && _xcode_accept_ok=true
elif [[ -c /dev/tty ]] && have_cmd sudo; then
sudo xcodebuild -license accept < /dev/tty && _xcode_accept_ok=true
fi
if [[ "$_xcode_accept_ok" == true ]]; then
step_ok "Xcode license accepted"
else
error "Could not accept Xcode license. Run manually:"
error " sudo xcodebuild -license accept"
error "then re-run this installer."
exit 1
fi
else
rm -f "$_xcode_test_file"
fi
if ! have_cmd git; then
warn "git is not available. Install git (e.g., Homebrew) and re-run bootstrap."
@@ -1175,6 +1193,43 @@ else
install_system_deps
fi
# Always check Xcode/CLT license on macOS, regardless of --install-system-deps.
# An un-accepted license causes `cc` to exit 69, breaking all Rust builds.
if [[ "$OS_NAME" == "Darwin" ]]; then
_xcode_test_file="$(mktemp /tmp/zeroclaw-xcode-check.XXXXXX.c)"
printf 'int main(){return 0;}\n' > "$_xcode_test_file"
if ! cc -x c "$_xcode_test_file" -o /dev/null 2>/dev/null; then
rm -f "$_xcode_test_file"
warn "Xcode/CLT license has not been accepted. Attempting to accept it now..."
# Use /dev/tty so sudo can prompt for a password even in a curl|bash pipe.
_xcode_accept_ok=false
if [[ "$(id -u)" -eq 0 ]]; then
xcodebuild -license accept && _xcode_accept_ok=true
elif [[ -c /dev/tty ]] && have_cmd sudo; then
sudo xcodebuild -license accept < /dev/tty && _xcode_accept_ok=true
fi
if [[ "$_xcode_accept_ok" == true ]]; then
step_ok "Xcode license accepted"
# Re-test compilation to confirm it's fixed.
_xcode_test_file="$(mktemp /tmp/zeroclaw-xcode-check.XXXXXX.c)"
printf 'int main(){return 0;}\n' > "$_xcode_test_file"
if ! cc -x c "$_xcode_test_file" -o /dev/null 2>/dev/null; then
rm -f "$_xcode_test_file"
error "C compiler still failing after license accept. Check your Xcode/CLT installation."
exit 1
fi
rm -f "$_xcode_test_file"
else
error "Could not accept Xcode license. Run manually:"
error " sudo xcodebuild -license accept"
error "then re-run this installer."
exit 1
fi
else
rm -f "$_xcode_test_file"
fi
fi
if [[ "$INSTALL_RUST" == true ]]; then
install_rust_toolchain
fi
@@ -1467,25 +1522,6 @@ if [[ -n "$ZEROCLAW_BIN" ]]; then
if "$ZEROCLAW_BIN" service restart 2>/dev/null; then
step_ok "Gateway service restarted"
# Fetch and display pairing code from running gateway
PAIR_CODE=""
for i in 1 2 3 4 5; do
sleep 2
if PAIR_CODE=$("$ZEROCLAW_BIN" gateway get-paircode 2>/dev/null | grep -oE '[0-9]{6}'); then
break
fi
done
if [[ -n "$PAIR_CODE" ]]; then
echo
echo -e " ${BOLD_BLUE}🔐 Gateway Pairing Code${RESET}"
echo
echo -e " ${BOLD_BLUE}┌──────────────┐${RESET}"
echo -e "   ${BOLD_BLUE}│${RESET}   ${BOLD}${PAIR_CODE}${RESET}   ${BOLD_BLUE}│${RESET}"
echo -e " ${BOLD_BLUE}└──────────────┘${RESET}"
echo
echo -e " ${DIM}Enter this code in the dashboard to pair your device.${RESET}"
echo -e " ${DIM}Run 'zeroclaw gateway get-paircode --new' anytime to generate a fresh code.${RESET}"
fi
else
step_fail "Gateway service restart failed — re-run with zeroclaw service start"
fi
@@ -1532,7 +1568,6 @@ GATEWAY_PORT=42617
DASHBOARD_URL="http://127.0.0.1:${GATEWAY_PORT}"
echo
echo -e "${BOLD}Dashboard URL:${RESET} ${BLUE}${DASHBOARD_URL}${RESET}"
echo -e "${DIM} Run 'zeroclaw gateway get-paircode' to get your pairing code.${RESET}"
# --- Copy to clipboard ---
COPIED_TO_CLIPBOARD=false
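The installer change above replaces the `xcrun --show-sdk-path` check, which can succeed even with an unaccepted license, with a test-compile of a trivial C file. A sketch of that probe with the compiler injectable, so the failure path can be demonstrated without an actual broken Xcode install (`false` stands in for a `cc` that exits 69):

```shell
# Probe: compile an empty program; any failure suggests the Xcode/CLT
# license has not been accepted (the real script then tries
# `sudo xcodebuild -license accept < /dev/tty`).
probe_compiler() {  # $1 = compiler command to test with
  src="$(mktemp /tmp/zeroclaw-xcode-check.XXXXXX.c)"
  printf 'int main(){return 0;}\n' > "$src"
  if "$1" -x c "$src" -o /dev/null 2>/dev/null; then
    rm -f "$src"; return 0
  fi
  rm -f "$src"; return 1
}

if probe_compiler false; then   # `false` simulates a failing cc
  echo "compiler ok"
else
  echo "license likely unaccepted"
fi
```

Reading from `/dev/tty` in the real flow is what lets sudo prompt for a password even when the script arrives via a `curl | bash` pipe.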
+2 -1
@@ -359,7 +359,7 @@ impl Agent {
None
};
let (mut tools, delegate_handle) = tools::all_tools_with_runtime(
let (mut tools, delegate_handle, _reaction_handle) = tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
@@ -373,6 +373,7 @@ impl Agent {
&config.agents,
config.api_key.as_deref(),
config,
None,
);
// ── Wire MCP tools (non-fatal) ─────────────────────────────
+23 -16
@@ -3525,7 +3525,7 @@ pub async fn run(
} else {
(None, None)
};
let (mut tools_registry, delegate_handle) = tools::all_tools_with_runtime(
let (mut tools_registry, delegate_handle, _reaction_handle) = tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
@@ -3539,6 +3539,7 @@ pub async fn run(
&config.agents,
config.api_key.as_deref(),
&config,
None,
);
let peripheral_tools: Vec<Box<dyn Tool>> =
@@ -3833,6 +3834,8 @@ pub async fn run(
Some(&config.autonomy),
native_tools,
config.skills.prompt_injection_mode,
config.agent.compact_context,
config.agent.max_system_prompt_chars,
);
// Append structured tool-use instructions with schemas (only for non-native providers)
@@ -4282,21 +4285,23 @@ pub async fn process_message(
} else {
(None, None)
};
let (mut tools_registry, delegate_handle_pm) = tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
mem.clone(),
composio_key,
composio_entity_id,
&config.browser,
&config.http_request,
&config.web_fetch,
&config.workspace_dir,
&config.agents,
config.api_key.as_deref(),
&config,
);
let (mut tools_registry, delegate_handle_pm, _reaction_handle_pm) =
tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
mem.clone(),
composio_key,
composio_entity_id,
&config.browser,
&config.http_request,
&config.web_fetch,
&config.workspace_dir,
&config.agents,
config.api_key.as_deref(),
&config,
None,
);
let peripheral_tools: Vec<Box<dyn Tool>> =
crate::peripherals::create_peripheral_tools(&config.peripherals).await?;
tools_registry.extend(peripheral_tools);
@@ -4492,6 +4497,8 @@ pub async fn process_message(
Some(&config.autonomy),
native_tools,
config.skills.prompt_injection_mode,
config.agent.compact_context,
config.agent.max_system_prompt_chars,
);
if !native_tools {
system_prompt.push_str(&build_tool_instructions(&tools_registry, Some(&i18n_descs)));
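The recurring change in these call sites is mechanical: `tools::all_tools_with_runtime` now returns a three-element tuple, and callers that do not consume the new reaction handle bind it with a leading underscore. A simplified, hypothetical stand-in (payload types are placeholders, not the crate's real signatures):

```rust
// Stand-in for the widened signature: (tools, delegate_handle)
// became (tools, delegate_handle, reaction_handle).
fn all_tools_with_runtime_sketch() -> (Vec<&'static str>, Option<u32>, Option<u32>) {
    (vec!["shell", "web_fetch"], None, None)
}

fn main() {
    // `_reaction_handle` keeps the binding explicit while signalling
    // that this call site intentionally ignores it.
    let (mut tools, _delegate_handle, _reaction_handle) = all_tools_with_runtime_sketch();
    tools.push("memory");
    println!("{}", tools.len());
}
```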
+2 -2
@@ -122,7 +122,7 @@ impl ApprovalManager {
}
// always_ask overrides everything.
if self.always_ask.contains(tool_name) {
if self.always_ask.contains("*") || self.always_ask.contains(tool_name) {
return true;
}
@@ -136,7 +136,7 @@ impl ApprovalManager {
}
// auto_approve skips the prompt.
if self.auto_approve.contains(tool_name) {
if self.auto_approve.contains("*") || self.auto_approve.contains(tool_name) {
return false;
}
+549
@@ -0,0 +1,549 @@
use super::traits::{Channel, ChannelMessage, SendMessage};
use async_trait::async_trait;
use futures_util::{SinkExt, StreamExt};
use parking_lot::Mutex;
use serde_json::json;
use std::collections::HashMap;
use std::sync::Arc;
use tokio_tungstenite::tungstenite::Message;
use uuid::Uuid;
use crate::memory::{Memory, MemoryCategory};
/// Discord History channel — connects via Gateway WebSocket, stores ALL non-bot messages
/// to a dedicated discord.db, and forwards @mention messages to the agent.
pub struct DiscordHistoryChannel {
bot_token: String,
guild_id: Option<String>,
allowed_users: Vec<String>,
/// Channel IDs to watch. Empty = watch all channels.
channel_ids: Vec<String>,
/// Dedicated discord.db memory backend.
discord_memory: Arc<dyn Memory>,
typing_handles: Mutex<HashMap<String, tokio::task::JoinHandle<()>>>,
proxy_url: Option<String>,
/// When false, DM messages are not stored in discord.db.
store_dms: bool,
/// When false, @mentions in DMs are not forwarded to the agent.
respond_to_dms: bool,
}
impl DiscordHistoryChannel {
pub fn new(
bot_token: String,
guild_id: Option<String>,
allowed_users: Vec<String>,
channel_ids: Vec<String>,
discord_memory: Arc<dyn Memory>,
store_dms: bool,
respond_to_dms: bool,
) -> Self {
Self {
bot_token,
guild_id,
allowed_users,
channel_ids,
discord_memory,
typing_handles: Mutex::new(HashMap::new()),
proxy_url: None,
store_dms,
respond_to_dms,
}
}
pub fn with_proxy_url(mut self, proxy_url: Option<String>) -> Self {
self.proxy_url = proxy_url;
self
}
fn http_client(&self) -> reqwest::Client {
crate::config::build_channel_proxy_client(
"channel.discord_history",
self.proxy_url.as_deref(),
)
}
fn is_user_allowed(&self, user_id: &str) -> bool {
if self.allowed_users.is_empty() {
return true; // default open for logging channel
}
self.allowed_users.iter().any(|u| u == "*" || u == user_id)
}
fn is_channel_watched(&self, channel_id: &str) -> bool {
self.channel_ids.is_empty() || self.channel_ids.iter().any(|c| c == channel_id)
}
fn bot_user_id_from_token(token: &str) -> Option<String> {
let part = token.split('.').next()?;
base64_decode(part)
}
async fn resolve_channel_name(&self, channel_id: &str) -> String {
// 1. Check persistent database (via discord_memory)
let cache_key = format!("cache:channel_name:{}", channel_id);
if let Ok(Some(cached_mem)) = self.discord_memory.get(&cache_key).await {
// Check if it's still fresh (e.g., less than 24 hours old)
// Note: cached_mem.timestamp is an RFC3339 string
let is_fresh =
if let Ok(ts) = chrono::DateTime::parse_from_rfc3339(&cached_mem.timestamp) {
chrono::Utc::now().signed_duration_since(ts.with_timezone(&chrono::Utc))
< chrono::Duration::hours(24)
} else {
false
};
if is_fresh {
return cached_mem.content.clone();
}
}
// 2. Fetch from API (either not in DB or stale)
let url = format!("https://discord.com/api/v10/channels/{channel_id}");
let resp = self
.http_client()
.get(&url)
.header("Authorization", format!("Bot {}", self.bot_token))
.send()
.await;
let name = if let Ok(r) = resp {
if let Ok(json) = r.json::<serde_json::Value>().await {
json.get("name")
.and_then(|n| n.as_str())
.map(|s| s.to_string())
.or_else(|| {
// For DMs, there might not be a 'name', use the recipient's username if available
json.get("recipients")
.and_then(|r| r.as_array())
.and_then(|a| a.first())
.and_then(|u| u.get("username"))
.and_then(|un| un.as_str())
.map(|s| format!("dm-{}", s))
})
} else {
None
}
} else {
None
};
let resolved = name.unwrap_or_else(|| channel_id.to_string());
// 3. Store in persistent database
let _ = self
.discord_memory
.store(
&cache_key,
&resolved,
crate::memory::MemoryCategory::Custom("channel_cache".to_string()),
Some(channel_id),
)
.await;
resolved
}
}
const BASE64_ALPHABET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
#[allow(clippy::cast_possible_truncation)]
fn base64_decode(input: &str) -> Option<String> {
let padded = match input.len() % 4 {
2 => format!("{input}=="),
3 => format!("{input}="),
_ => input.to_string(),
};
let mut bytes = Vec::new();
let chars: Vec<u8> = padded.bytes().collect();
for chunk in chars.chunks(4) {
if chunk.len() < 4 {
break;
}
let mut v = [0usize; 4];
for (i, &b) in chunk.iter().enumerate() {
if b == b'=' {
v[i] = 0;
} else {
v[i] = BASE64_ALPHABET.iter().position(|&a| a == b)?;
}
}
bytes.push(((v[0] << 2) | (v[1] >> 4)) as u8);
if chunk[2] != b'=' {
bytes.push((((v[1] & 0xF) << 4) | (v[2] >> 2)) as u8);
}
if chunk[3] != b'=' {
bytes.push((((v[2] & 0x3) << 6) | v[3]) as u8);
}
}
String::from_utf8(bytes).ok()
}
fn contains_bot_mention(content: &str, bot_user_id: &str) -> bool {
if bot_user_id.is_empty() {
return false;
}
content.contains(&format!("<@{bot_user_id}>"))
|| content.contains(&format!("<@!{bot_user_id}>"))
}
fn strip_bot_mention(content: &str, bot_user_id: &str) -> String {
let mut result = content.to_string();
for tag in [format!("<@{bot_user_id}>"), format!("<@!{bot_user_id}>")] {
result = result.replace(&tag, " ");
}
result.trim().to_string()
}
#[async_trait]
impl Channel for DiscordHistoryChannel {
fn name(&self) -> &str {
"discord_history"
}
/// Send a reply back to Discord (used when agent responds to @mention).
async fn send(&self, message: &SendMessage) -> anyhow::Result<()> {
let content = super::strip_tool_call_tags(&message.content);
let url = format!(
"https://discord.com/api/v10/channels/{}/messages",
message.recipient
);
self.http_client()
.post(&url)
.header("Authorization", format!("Bot {}", self.bot_token))
.json(&json!({"content": content}))
.send()
.await?;
Ok(())
}
#[allow(clippy::too_many_lines)]
async fn listen(&self, tx: tokio::sync::mpsc::Sender<ChannelMessage>) -> anyhow::Result<()> {
let bot_user_id = Self::bot_user_id_from_token(&self.bot_token).unwrap_or_default();
// Get Gateway URL
let gw_resp: serde_json::Value = self
.http_client()
.get("https://discord.com/api/v10/gateway/bot")
.header("Authorization", format!("Bot {}", self.bot_token))
.send()
.await?
.json()
.await?;
let gw_url = gw_resp
.get("url")
.and_then(|u| u.as_str())
.unwrap_or("wss://gateway.discord.gg");
let ws_url = format!("{gw_url}/?v=10&encoding=json");
tracing::info!("DiscordHistory: connecting to gateway...");
let (ws_stream, _) = tokio_tungstenite::connect_async(&ws_url).await?;
let (mut write, mut read) = ws_stream.split();
// Read Hello (opcode 10)
let hello = read.next().await.ok_or(anyhow::anyhow!("No hello"))??;
let hello_data: serde_json::Value = serde_json::from_str(&hello.to_string())?;
let heartbeat_interval = hello_data
.get("d")
.and_then(|d| d.get("heartbeat_interval"))
.and_then(serde_json::Value::as_u64)
.unwrap_or(41250);
// Identify with intents for guild + DM messages + message content
let identify = json!({
"op": 2,
"d": {
"token": self.bot_token,
"intents": 37377,
"properties": {
"os": "linux",
"browser": "zeroclaw",
"device": "zeroclaw"
}
}
});
write
.send(Message::Text(identify.to_string().into()))
.await?;
tracing::info!("DiscordHistory: connected and identified");
let mut sequence: i64 = -1;
let (hb_tx, mut hb_rx) = tokio::sync::mpsc::channel::<()>(1);
tokio::spawn(async move {
let mut interval =
tokio::time::interval(std::time::Duration::from_millis(heartbeat_interval));
loop {
interval.tick().await;
if hb_tx.send(()).await.is_err() {
break;
}
}
});
let guild_filter = self.guild_id.clone();
let discord_memory = Arc::clone(&self.discord_memory);
let store_dms = self.store_dms;
let respond_to_dms = self.respond_to_dms;
loop {
tokio::select! {
_ = hb_rx.recv() => {
let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
let hb = json!({"op": 1, "d": d});
if write.send(Message::Text(hb.to_string().into())).await.is_err() {
break;
}
}
msg = read.next() => {
let msg = match msg {
Some(Ok(Message::Text(t))) => t,
Some(Ok(Message::Ping(payload))) => {
if write.send(Message::Pong(payload)).await.is_err() {
break;
}
continue;
}
Some(Ok(Message::Close(_))) | None => break,
Some(Err(e)) => {
tracing::warn!("DiscordHistory: websocket error: {e}");
break;
}
_ => continue,
};
let event: serde_json::Value = match serde_json::from_str(msg.as_ref()) {
Ok(e) => e,
Err(_) => continue,
};
if let Some(s) = event.get("s").and_then(serde_json::Value::as_i64) {
sequence = s;
}
let op = event.get("op").and_then(serde_json::Value::as_u64).unwrap_or(0);
match op {
1 => {
let d = if sequence >= 0 { json!(sequence) } else { json!(null) };
let hb = json!({"op": 1, "d": d});
if write.send(Message::Text(hb.to_string().into())).await.is_err() {
break;
}
continue;
}
7 => { tracing::warn!("DiscordHistory: Reconnect (op 7)"); break; }
9 => { tracing::warn!("DiscordHistory: Invalid Session (op 9)"); break; }
_ => {}
}
let event_type = event.get("t").and_then(|t| t.as_str()).unwrap_or("");
if event_type != "MESSAGE_CREATE" {
continue;
}
let Some(d) = event.get("d") else { continue };
// Skip messages from the bot itself
let author_id = d
.get("author")
.and_then(|a| a.get("id"))
.and_then(|i| i.as_str())
.unwrap_or("");
let username = d
.get("author")
.and_then(|a| a.get("username"))
.and_then(|i| i.as_str())
.unwrap_or(author_id);
if author_id == bot_user_id {
continue;
}
// Skip other bots
if d.get("author")
.and_then(|a| a.get("bot"))
.and_then(serde_json::Value::as_bool)
.unwrap_or(false)
{
continue;
}
let channel_id = d
.get("channel_id")
.and_then(|c| c.as_str())
.unwrap_or("")
.to_string();
// DM detection: DMs have no guild_id
let is_dm_event = d.get("guild_id").and_then(serde_json::Value::as_str).is_none();
// Resolve channel name (with cache)
let channel_display = if is_dm_event {
"dm".to_string()
} else {
self.resolve_channel_name(&channel_id).await
};
if is_dm_event && !store_dms && !respond_to_dms {
continue;
}
// Guild filter
if let Some(ref gid) = guild_filter {
let msg_guild = d.get("guild_id").and_then(serde_json::Value::as_str);
if let Some(g) = msg_guild {
if g != gid {
continue;
}
}
}
// Channel filter
if !self.is_channel_watched(&channel_id) {
continue;
}
if !self.is_user_allowed(author_id) {
continue;
}
let content = d.get("content").and_then(|c| c.as_str()).unwrap_or("");
let message_id = d.get("id").and_then(|i| i.as_str()).unwrap_or("");
let is_mention = contains_bot_mention(content, &bot_user_id);
// Collect attachment URLs
let attachments: Vec<String> = d
.get("attachments")
.and_then(|a| a.as_array())
.map(|arr| {
arr.iter()
.filter_map(|a| a.get("url").and_then(|u| u.as_str()))
.map(|u| u.to_string())
.collect()
})
.unwrap_or_default();
// Store messages to discord.db (skip DMs if store_dms=false)
if (!is_dm_event || store_dms) && (!content.is_empty() || !attachments.is_empty()) {
let ts = chrono::Utc::now().to_rfc3339();
let mut mem_content = format!(
"@{username} in #{channel_display} at {ts}: {content}"
);
if !attachments.is_empty() {
mem_content.push_str(" [attachments: ");
mem_content.push_str(&attachments.join(", "));
mem_content.push(']');
}
let mem_key = format!(
"discord_{}",
if message_id.is_empty() {
Uuid::new_v4().to_string()
} else {
message_id.to_string()
}
);
let channel_id_for_session = if channel_id.is_empty() {
None
} else {
Some(channel_id.as_str())
};
if let Err(err) = discord_memory
.store(
&mem_key,
&mem_content,
MemoryCategory::Custom("discord".to_string()),
channel_id_for_session,
)
.await
{
tracing::warn!("discord_history: failed to store message: {err}");
} else {
tracing::debug!(
"discord_history: stored message from @{username} in #{channel_display}"
);
}
}
// Forward @mention to agent (skip DMs if respond_to_dms=false)
if is_mention && (!is_dm_event || respond_to_dms) {
let clean_content = strip_bot_mention(content, &bot_user_id);
if clean_content.is_empty() {
continue;
}
let channel_msg = ChannelMessage {
id: if message_id.is_empty() {
Uuid::new_v4().to_string()
} else {
format!("discord_{message_id}")
},
sender: author_id.to_string(),
reply_target: if channel_id.is_empty() {
author_id.to_string()
} else {
channel_id.clone()
},
content: clean_content,
channel: "discord_history".to_string(),
timestamp: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
thread_ts: None,
interruption_scope_id: None,
};
if tx.send(channel_msg).await.is_err() {
break;
}
}
}
}
}
Ok(())
}
async fn health_check(&self) -> bool {
self.http_client()
.get("https://discord.com/api/v10/users/@me")
.header("Authorization", format!("Bot {}", self.bot_token))
.send()
.await
.map(|r| r.status().is_success())
.unwrap_or(false)
}
async fn start_typing(&self, recipient: &str) -> anyhow::Result<()> {
let mut guard = self.typing_handles.lock();
if let Some(h) = guard.remove(recipient) {
h.abort();
}
let client = self.http_client();
let token = self.bot_token.clone();
let channel_id = recipient.to_string();
let handle = tokio::spawn(async move {
let url = format!("https://discord.com/api/v10/channels/{channel_id}/typing");
loop {
let _ = client
.post(&url)
.header("Authorization", format!("Bot {token}"))
.send()
.await;
tokio::time::sleep(std::time::Duration::from_secs(8)).await;
}
});
guard.insert(recipient.to_string(), handle);
Ok(())
}
async fn stop_typing(&self, recipient: &str) -> anyhow::Result<()> {
let mut guard = self.typing_handles.lock();
if let Some(handle) = guard.remove(recipient) {
handle.abort();
}
Ok(())
}
}
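The op-1 branch in the gateway loop above replies with a heartbeat carrying the last seen sequence number, or null while `sequence` is still at its negative sentinel. A minimal dependency-free sketch of that payload construction (the real code uses serde_json's `json!` macro; this builds the equivalent compact string by hand):

```rust
// Sketch of the op-1 heartbeat reply: {"op":1,"d":<seq or null>}.
// A negative sequence means "no sequence seen yet", matching the
// `if sequence >= 0` check in the loop above.
fn heartbeat_payload(sequence: i64) -> String {
    let d = if sequence >= 0 {
        sequence.to_string()
    } else {
        "null".to_string()
    };
    format!(r#"{{"op":1,"d":{d}}}"#)
}
```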
@@ -19,7 +19,9 @@ pub mod clawdtalk;
pub mod cli;
pub mod dingtalk;
pub mod discord;
pub mod discord_history;
pub mod email_channel;
pub mod gmail_push;
pub mod imessage;
pub mod irc;
#[cfg(feature = "channel-lark")]
@@ -45,6 +47,8 @@ pub mod traits;
pub mod transcription;
pub mod tts;
pub mod twitter;
#[cfg(feature = "voice-wake")]
pub mod voice_wake;
pub mod wati;
pub mod webhook;
pub mod wecom;
@@ -59,7 +63,9 @@ pub use clawdtalk::{ClawdTalkChannel, ClawdTalkConfig};
pub use cli::CliChannel;
pub use dingtalk::DingTalkChannel;
pub use discord::DiscordChannel;
pub use discord_history::DiscordHistoryChannel;
pub use email_channel::EmailChannel;
pub use gmail_push::GmailPushChannel;
pub use imessage::IMessageChannel;
pub use irc::IrcChannel;
#[cfg(feature = "channel-lark")]
@@ -82,6 +88,8 @@ pub use traits::{Channel, SendMessage};
#[allow(unused_imports)]
pub use tts::{TtsManager, TtsProvider};
pub use twitter::TwitterChannel;
#[cfg(feature = "voice-wake")]
pub use voice_wake::VoiceWakeChannel;
pub use wati::WatiChannel;
pub use webhook::WebhookChannel;
pub use wecom::WeComChannel;
@@ -3128,9 +3136,12 @@ pub fn build_system_prompt_with_mode(
Some(&autonomy_cfg),
native_tools,
skills_prompt_mode,
false,
0,
)
}
#[allow(clippy::too_many_arguments)]
pub fn build_system_prompt_with_mode_and_autonomy(
workspace_dir: &std::path::Path,
model_name: &str,
@@ -3141,6 +3152,8 @@ pub fn build_system_prompt_with_mode_and_autonomy(
autonomy_config: Option<&crate::config::AutonomyConfig>,
native_tools: bool,
skills_prompt_mode: crate::config::SkillsPromptInjectionMode,
compact_context: bool,
max_system_prompt_chars: usize,
) -> String {
use std::fmt::Write;
let mut prompt = String::with_capacity(8192);
@@ -3167,11 +3180,19 @@ pub fn build_system_prompt_with_mode_and_autonomy(
// ── 1. Tooling ──────────────────────────────────────────────
if !tools.is_empty() {
prompt.push_str("## Tools\n\n");
prompt.push_str("You have access to the following tools:\n\n");
for (name, desc) in tools {
let _ = writeln!(prompt, "- **{name}**: {desc}");
if compact_context {
// Compact mode: tool names only, no descriptions/schemas
prompt.push_str("Available tools: ");
let names: Vec<&str> = tools.iter().map(|(name, _)| *name).collect();
prompt.push_str(&names.join(", "));
prompt.push_str("\n\n");
} else {
prompt.push_str("You have access to the following tools:\n\n");
for (name, desc) in tools {
let _ = writeln!(prompt, "- **{name}**: {desc}");
}
prompt.push('\n');
}
prompt.push('\n');
}
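In compact mode the tool section above collapses to a single comma-separated line of names, dropping the per-tool descriptions. A small sketch of that formatting step (the helper name is ours, not from the codebase):

```rust
// Hypothetical helper mirroring the compact branch above: tool names
// only, joined by ", ", no descriptions or schemas.
fn compact_tool_line(tools: &[(&str, &str)]) -> String {
    let names: Vec<&str> = tools.iter().map(|(name, _)| *name).collect();
    format!("Available tools: {}\n\n", names.join(", "))
}
```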
// ── 1b. Hardware (when gpio/arduino tools present) ───────────
@@ -3315,11 +3336,13 @@ pub fn build_system_prompt_with_mode_and_autonomy(
std::env::consts::OS,
);
// ── 8. Channel Capabilities ─────────────────────────────────────
prompt.push_str("## Channel Capabilities\n\n");
prompt.push_str("- You are running as a messaging bot. Your response is automatically sent back to the user's channel.\n");
prompt.push_str("- You do NOT need to ask permission to respond — just respond directly.\n");
prompt.push_str(match autonomy_config.map(|cfg| cfg.level) {
// ── 8. Channel Capabilities (skipped in compact_context mode) ──
if !compact_context {
prompt.push_str("## Channel Capabilities\n\n");
prompt.push_str("- You are running as a messaging bot. Your response is automatically sent back to the user's channel.\n");
prompt
.push_str("- You do NOT need to ask permission to respond — just respond directly.\n");
prompt.push_str(match autonomy_config.map(|cfg| cfg.level) {
Some(crate::security::AutonomyLevel::Full) => {
"- If the runtime policy already allows a tool, use it directly; do not ask the user for extra approval.\n\
- Never pretend you are waiting for a human approval click or confirmation when the runtime policy already permits the action.\n\
@@ -3333,10 +3356,23 @@ pub fn build_system_prompt_with_mode_and_autonomy(
- If there is no approval path for this channel or the runtime blocks an action, explain that restriction directly instead of simulating an approval flow.\n"
}
});
prompt.push_str("- NEVER repeat, describe, or echo credentials, tokens, API keys, or secrets in your responses.\n");
prompt.push_str("- If a tool output contains credentials, they have already been redacted — do not mention them.\n");
prompt.push_str("- When a user sends a voice note, it is automatically transcribed to text. Your text reply is automatically converted to a voice note and sent back. Do NOT attempt to generate audio yourself — TTS is handled by the channel.\n");
prompt.push_str("- NEVER narrate or describe your tool usage. Do NOT say 'Let me fetch...', 'I will use...', 'Searching...', or similar. Give the FINAL ANSWER only — no intermediate steps, no tool mentions, no progress updates.\n\n");
prompt.push_str("- NEVER repeat, describe, or echo credentials, tokens, API keys, or secrets in your responses.\n");
prompt.push_str("- If a tool output contains credentials, they have already been redacted — do not mention them.\n");
prompt.push_str("- When a user sends a voice note, it is automatically transcribed to text. Your text reply is automatically converted to a voice note and sent back. Do NOT attempt to generate audio yourself — TTS is handled by the channel.\n");
prompt.push_str("- NEVER narrate or describe your tool usage. Do NOT say 'Let me fetch...', 'I will use...', 'Searching...', or similar. Give the FINAL ANSWER only — no intermediate steps, no tool mentions, no progress updates.\n\n");
} // end if !compact_context (Channel Capabilities)
// ── 9. Truncation (max_system_prompt_chars budget) ──────────
if max_system_prompt_chars > 0 && prompt.len() > max_system_prompt_chars {
// Truncate on a char boundary, keeping the top portion (identity + safety).
let mut end = max_system_prompt_chars;
// Ensure we don't split a multi-byte UTF-8 character.
while !prompt.is_char_boundary(end) && end > 0 {
end -= 1;
}
prompt.truncate(end);
prompt.push_str("\n\n[System prompt truncated to fit context budget]\n");
}
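The truncation above walks the cut point backwards until it lands on a UTF-8 character boundary, since `String::truncate` panics if asked to split a multi-byte character. The same logic as a standalone sketch (function name is ours):

```rust
// Walk `end` back until String::is_char_boundary holds, then truncate.
// Mirrors the budget-enforcement step above: without the walk-back,
// truncating mid-character would panic.
fn truncate_on_char_boundary(s: &mut String, max: usize) {
    if s.len() <= max {
        return;
    }
    let mut end = max;
    while !s.is_char_boundary(end) && end > 0 {
        end -= 1;
    }
    s.truncate(end);
}
```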
if prompt.is_empty() {
"You are ZeroClaw, a fast and efficient AI assistant built in Rust. Be helpful, concise, and direct."
@@ -3747,6 +3783,31 @@ fn collect_configured_channels(
});
}
if let Some(ref dh) = config.channels_config.discord_history {
match crate::memory::SqliteMemory::new_named(&config.workspace_dir, "discord") {
Ok(discord_mem) => {
channels.push(ConfiguredChannel {
display_name: "Discord History",
channel: Arc::new(
DiscordHistoryChannel::new(
dh.bot_token.clone(),
dh.guild_id.clone(),
dh.allowed_users.clone(),
dh.channel_ids.clone(),
Arc::new(discord_mem),
dh.store_dms,
dh.respond_to_dms,
)
.with_proxy_url(dh.proxy_url.clone()),
),
});
}
Err(e) => {
tracing::error!("discord_history: failed to open discord.db: {e}");
}
}
}
if let Some(ref sl) = config.channels_config.slack {
channels.push(ConfiguredChannel {
display_name: "Slack",
@@ -3938,6 +3999,15 @@ fn collect_configured_channels(
});
}
if let Some(ref gp_cfg) = config.channels_config.gmail_push {
if gp_cfg.enabled {
channels.push(ConfiguredChannel {
display_name: "Gmail Push",
channel: Arc::new(GmailPushChannel::new(gp_cfg.clone())),
});
}
}
if let Some(ref irc) = config.channels_config.irc {
channels.push(ConfiguredChannel {
display_name: "IRC",
@@ -4114,6 +4184,17 @@ fn collect_configured_channels(
});
}
#[cfg(feature = "voice-wake")]
if let Some(ref vw) = config.channels_config.voice_wake {
channels.push(ConfiguredChannel {
display_name: "VoiceWake",
channel: Arc::new(VoiceWakeChannel::new(
vw.clone(),
config.transcription.clone(),
)),
});
}
if let Some(ref wh) = config.channels_config.webhook {
channels.push(ConfiguredChannel {
display_name: "Webhook",
@@ -4264,22 +4345,22 @@ pub async fn start_channels(config: Config) -> Result<()> {
};
// Build system prompt from workspace identity files + skills
let workspace = config.workspace_dir.clone();
let (mut built_tools, delegate_handle_ch): (Vec<Box<dyn Tool>>, _) =
tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
Arc::clone(&mem),
composio_key,
composio_entity_id,
&config.browser,
&config.http_request,
&config.web_fetch,
&workspace,
&config.agents,
config.api_key.as_deref(),
&config,
);
let (mut built_tools, delegate_handle_ch, reaction_handle_ch) = tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
Arc::clone(&mem),
composio_key,
composio_entity_id,
&config.browser,
&config.http_request,
&config.web_fetch,
&workspace,
&config.agents,
config.api_key.as_deref(),
&config,
None,
);
// Wire MCP tools into the registry before freezing — non-fatal.
// When `deferred_loading` is enabled, MCP tools are NOT added eagerly.
@@ -4452,6 +4533,8 @@ pub async fn start_channels(config: Config) -> Result<()> {
Some(&config.autonomy),
native_tools,
config.skills.prompt_injection_mode,
config.agent.compact_context,
config.agent.max_system_prompt_chars,
);
if !native_tools {
system_prompt.push_str(&build_tool_instructions(
@@ -4551,6 +4634,15 @@ pub async fn start_channels(config: Config) -> Result<()> {
.map(|ch| (ch.name().to_string(), Arc::clone(ch)))
.collect::<HashMap<_, _>>(),
);
// Populate the reaction tool's channel map now that channels are initialized.
if let Some(ref handle) = reaction_handle_ch {
let mut map = handle.write();
for (name, ch) in channels_by_name.as_ref() {
map.insert(name.clone(), Arc::clone(ch));
}
}
let max_in_flight_messages = compute_max_in_flight_messages(channels.len());
println!(" 🚦 In-flight message limit: {max_in_flight_messages}");
@@ -7774,6 +7866,8 @@ BTC is currently around $65,000 based on latest tool output."#
Some(&config),
false,
crate::config::SkillsPromptInjectionMode::Full,
false,
0,
);
assert!(
@@ -7803,6 +7897,8 @@ BTC is currently around $65,000 based on latest tool output."#
Some(&config),
false,
crate::config::SkillsPromptInjectionMode::Full,
false,
0,
);
assert!(
@@ -0,0 +1,586 @@
//! Voice Wake Word detection channel.
//!
//! Listens on the default microphone via `cpal`, detects a configurable wake
//! word using energy-based VAD followed by transcription-based keyword matching,
//! then captures the subsequent utterance and dispatches it as a channel message.
//!
//! Gated behind the `voice-wake` Cargo feature.
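The Listening → Triggered → Capturing flow described above can be reduced to a pure transition function. This is a simplification of the real loop below, which also manages capture buffers, silence timeouts, and the max-capture deadline:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum State {
    Listening,
    Triggered,
    Capturing,
}

// spike: this chunk's RMS energy crossed the threshold.
// wake_word: Some(result) once the Triggered-window transcription returns.
fn step(state: State, spike: bool, wake_word: Option<bool>) -> State {
    match (state, spike, wake_word) {
        (State::Listening, true, _) => State::Triggered,
        (State::Triggered, _, Some(true)) => State::Capturing,
        (State::Triggered, _, Some(false)) => State::Listening,
        (s, _, _) => s, // otherwise hold state; timers handled separately
    }
}
```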
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
use anyhow::{bail, Result};
use async_trait::async_trait;
use tokio::sync::mpsc;
use tracing::{debug, info, warn};
use crate::channels::transcription::transcribe_audio;
use crate::config::schema::VoiceWakeConfig;
use crate::config::TranscriptionConfig;
use super::traits::{Channel, ChannelMessage, SendMessage};
// ── State machine ──────────────────────────────────────────────
/// Maximum allowed capture duration (seconds) to prevent unbounded memory growth.
const MAX_CAPTURE_SECS_LIMIT: u32 = 300;
/// Minimum silence timeout to prevent API hammering.
const MIN_SILENCE_TIMEOUT_MS: u32 = 100;
/// Internal states for the wake-word detector.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum WakeState {
/// Passively monitoring microphone energy levels.
Listening,
/// Energy spike detected — capturing a short window to check for wake word.
Triggered,
/// Wake word confirmed — capturing the full utterance that follows.
Capturing,
}
impl std::fmt::Display for WakeState {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Listening => write!(f, "Listening"),
Self::Triggered => write!(f, "Triggered"),
Self::Capturing => write!(f, "Capturing"),
}
}
}
// ── Channel implementation ─────────────────────────────────────
/// Voice wake-word channel that activates on a spoken keyword.
pub struct VoiceWakeChannel {
config: VoiceWakeConfig,
transcription_config: TranscriptionConfig,
}
impl VoiceWakeChannel {
/// Create a new `VoiceWakeChannel` from its config sections.
pub fn new(config: VoiceWakeConfig, transcription_config: TranscriptionConfig) -> Self {
Self {
config,
transcription_config,
}
}
}
#[async_trait]
impl Channel for VoiceWakeChannel {
fn name(&self) -> &str {
"voice_wake"
}
async fn send(&self, _message: &SendMessage) -> Result<()> {
// Voice wake is input-only; outbound messages are not supported.
Ok(())
}
async fn listen(&self, tx: mpsc::Sender<ChannelMessage>) -> Result<()> {
let config = self.config.clone();
let transcription_config = self.transcription_config.clone();
// ── Validate config ───────────────────────────────────
let energy_threshold = config.energy_threshold;
if !energy_threshold.is_finite() || energy_threshold <= 0.0 {
bail!("VoiceWake: energy_threshold must be a positive finite number, got {energy_threshold}");
}
if config.silence_timeout_ms < MIN_SILENCE_TIMEOUT_MS {
bail!(
"VoiceWake: silence_timeout_ms must be >= {MIN_SILENCE_TIMEOUT_MS}, got {}",
config.silence_timeout_ms
);
}
let max_capture_secs = config.max_capture_secs.min(MAX_CAPTURE_SECS_LIMIT);
if max_capture_secs != config.max_capture_secs {
warn!(
"VoiceWake: max_capture_secs clamped from {} to {MAX_CAPTURE_SECS_LIMIT}",
config.max_capture_secs
);
}
// Run the blocking audio capture loop on a dedicated thread.
let (audio_tx, mut audio_rx) = mpsc::channel::<Vec<f32>>(64);
let silence_timeout = Duration::from_millis(u64::from(config.silence_timeout_ms));
let max_capture = Duration::from_secs(u64::from(max_capture_secs));
let sample_rate: u32;
let channels_count: u16;
// ── Initialise cpal stream ────────────────────────────
// cpal::Stream is !Send, so we build and hold it on a dedicated thread.
// When the listen function exits, the shutdown oneshot is dropped,
// the thread exits, and the stream + microphone are released.
let (shutdown_tx, shutdown_rx) = tokio::sync::oneshot::channel::<()>();
let (init_tx, init_rx) = tokio::sync::oneshot::channel::<Result<(u32, u16)>>();
{
let audio_tx_clone = audio_tx.clone();
std::thread::spawn(move || {
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};
let result = (|| -> Result<(u32, u16, cpal::Stream)> {
let host = cpal::default_host();
let device = host.default_input_device().ok_or_else(|| {
anyhow::anyhow!("No default audio input device available")
})?;
let supported = device.default_input_config()?;
let sr = supported.sample_rate().0;
let ch = supported.channels();
info!(
device = ?device.name().unwrap_or_default(),
sample_rate = sr,
channels = ch,
"VoiceWake: opening audio input"
);
let stream_config: cpal::StreamConfig = supported.into();
let stream = device.build_input_stream(
&stream_config,
move |data: &[f32], _: &cpal::InputCallbackInfo| {
let _ = audio_tx_clone.try_send(data.to_vec());
},
move |err| {
warn!("VoiceWake: audio stream error: {err}");
},
None,
)?;
stream.play()?;
Ok((sr, ch, stream))
})();
match result {
Ok((sr, ch, _stream)) => {
let _ = init_tx.send(Ok((sr, ch)));
// Hold the stream alive until shutdown is signalled.
let _ = shutdown_rx.blocking_recv();
debug!("VoiceWake: stream holder thread exiting");
}
Err(e) => {
let _ = init_tx.send(Err(e));
}
}
});
let (sr, ch) = init_rx
.await
.map_err(|_| anyhow::anyhow!("VoiceWake: stream init thread panicked"))??;
sample_rate = sr;
channels_count = ch;
}
// Drop the extra sender so the channel closes when the stream sender drops.
drop(audio_tx);
// ── Main detection loop ───────────────────────────────
let wake_word = config.wake_word.to_lowercase();
let mut state = WakeState::Listening;
let mut capture_buf: Vec<f32> = Vec::new();
let mut last_voice_at = Instant::now();
let mut capture_start = Instant::now();
let mut msg_counter: u64 = 0;
// Hard cap on capture buffer: max_capture_secs * sample_rate * channels * 2 (safety margin).
let max_buf_samples =
max_capture_secs as usize * sample_rate as usize * channels_count as usize * 2;
info!(wake_word = %wake_word, "VoiceWake: entering listen loop");
while let Some(chunk) = audio_rx.recv().await {
let energy = compute_rms_energy(&chunk);
match state {
WakeState::Listening => {
if energy >= energy_threshold {
debug!(
energy,
"VoiceWake: energy spike — transitioning to Triggered"
);
state = WakeState::Triggered;
capture_buf.clear();
capture_buf.extend_from_slice(&chunk);
last_voice_at = Instant::now();
capture_start = Instant::now();
}
}
WakeState::Triggered => {
if capture_buf.len() + chunk.len() <= max_buf_samples {
capture_buf.extend_from_slice(&chunk);
}
if energy >= energy_threshold {
last_voice_at = Instant::now();
}
let since_voice = last_voice_at.elapsed();
let since_start = capture_start.elapsed();
// After enough silence or max time, transcribe to check for wake word.
if since_voice >= silence_timeout || since_start >= max_capture {
debug!("VoiceWake: Triggered window closed — transcribing for wake word");
let wav_bytes =
encode_wav_from_f32(&capture_buf, sample_rate, channels_count);
match transcribe_audio(wav_bytes, "wake_check.wav", &transcription_config)
.await
{
Ok(text) => {
let lower = text.to_lowercase();
if lower.contains(&wake_word) {
info!(text = %text, "VoiceWake: wake word detected — capturing utterance");
state = WakeState::Capturing;
capture_buf.clear();
last_voice_at = Instant::now();
capture_start = Instant::now();
} else {
debug!(text = %text, "VoiceWake: no wake word — back to Listening");
state = WakeState::Listening;
capture_buf.clear();
}
}
Err(e) => {
warn!("VoiceWake: transcription error during wake check: {e}");
state = WakeState::Listening;
capture_buf.clear();
}
}
}
}
WakeState::Capturing => {
if capture_buf.len() + chunk.len() <= max_buf_samples {
capture_buf.extend_from_slice(&chunk);
}
if energy >= energy_threshold {
last_voice_at = Instant::now();
}
let since_voice = last_voice_at.elapsed();
let since_start = capture_start.elapsed();
if since_voice >= silence_timeout || since_start >= max_capture {
debug!("VoiceWake: utterance capture complete — transcribing");
let wav_bytes =
encode_wav_from_f32(&capture_buf, sample_rate, channels_count);
match transcribe_audio(wav_bytes, "utterance.wav", &transcription_config)
.await
{
Ok(text) => {
let trimmed = text.trim().to_string();
if !trimmed.is_empty() {
msg_counter += 1;
let ts = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let msg = ChannelMessage {
id: format!("voice_wake_{msg_counter}"),
sender: "voice_user".into(),
reply_target: "voice_user".into(),
content: trimmed,
channel: "voice_wake".into(),
timestamp: ts,
thread_ts: None,
interruption_scope_id: None,
};
if let Err(e) = tx.send(msg).await {
warn!("VoiceWake: failed to dispatch message: {e}");
}
}
}
Err(e) => {
warn!("VoiceWake: transcription error for utterance: {e}");
}
}
state = WakeState::Listening;
capture_buf.clear();
}
}
}
}
// Signal the stream holder thread to exit and release the microphone.
drop(shutdown_tx);
bail!("VoiceWake: audio stream ended unexpectedly");
}
}
// ── Audio utilities ────────────────────────────────────────────
/// Compute RMS (root-mean-square) energy of an audio chunk.
pub fn compute_rms_energy(samples: &[f32]) -> f32 {
if samples.is_empty() {
return 0.0;
}
let sum_sq: f32 = samples.iter().map(|s| s * s).sum();
(sum_sq / samples.len() as f32).sqrt()
}
/// Encode raw f32 PCM samples as a WAV byte buffer (16-bit PCM).
///
/// This produces a minimal valid WAV file that Whisper-compatible APIs accept.
pub fn encode_wav_from_f32(samples: &[f32], sample_rate: u32, channels: u16) -> Vec<u8> {
let bits_per_sample: u16 = 16;
let byte_rate = u32::from(channels) * sample_rate * u32::from(bits_per_sample) / 8;
let block_align = channels * bits_per_sample / 8;
// Guard against u32 overflow — reject buffers that exceed WAV's 4 GB limit.
let data_bytes = samples.len() * 2;
assert!(
u32::try_from(data_bytes).is_ok(),
"audio buffer too large for WAV encoding ({data_bytes} bytes)"
);
#[allow(clippy::cast_possible_truncation)]
let data_len = data_bytes as u32;
let file_len = 36 + data_len;
let mut buf = Vec::with_capacity(file_len as usize + 8);
// RIFF header
buf.extend_from_slice(b"RIFF");
buf.extend_from_slice(&file_len.to_le_bytes());
buf.extend_from_slice(b"WAVE");
// fmt chunk
buf.extend_from_slice(b"fmt ");
buf.extend_from_slice(&16u32.to_le_bytes()); // chunk size
buf.extend_from_slice(&1u16.to_le_bytes()); // PCM format
buf.extend_from_slice(&channels.to_le_bytes());
buf.extend_from_slice(&sample_rate.to_le_bytes());
buf.extend_from_slice(&byte_rate.to_le_bytes());
buf.extend_from_slice(&block_align.to_le_bytes());
buf.extend_from_slice(&bits_per_sample.to_le_bytes());
// data chunk
buf.extend_from_slice(b"data");
buf.extend_from_slice(&data_len.to_le_bytes());
for &sample in samples {
let clamped = sample.clamp(-1.0, 1.0);
#[allow(clippy::cast_possible_truncation)]
let pcm16 = (clamped * 32767.0) as i16; // clamped to [-1,1] so fits i16
buf.extend_from_slice(&pcm16.to_le_bytes());
}
buf
}
// ── Tests ──────────────────────────────────────────────────────
#[cfg(test)]
mod tests {
use super::*;
use crate::config::traits::ChannelConfig;
// ── State machine tests ────────────────────────────────
#[test]
fn wake_state_display() {
assert_eq!(WakeState::Listening.to_string(), "Listening");
assert_eq!(WakeState::Triggered.to_string(), "Triggered");
assert_eq!(WakeState::Capturing.to_string(), "Capturing");
}
#[test]
fn wake_state_equality() {
assert_eq!(WakeState::Listening, WakeState::Listening);
assert_ne!(WakeState::Listening, WakeState::Triggered);
}
// ── Energy computation tests ───────────────────────────
#[test]
fn rms_energy_of_silence_is_zero() {
let silence = vec![0.0f32; 1024];
assert_eq!(compute_rms_energy(&silence), 0.0);
}
#[test]
fn rms_energy_of_empty_is_zero() {
assert_eq!(compute_rms_energy(&[]), 0.0);
}
#[test]
fn rms_energy_of_constant_signal() {
// Constant signal at 0.5 → RMS should be 0.5
let signal = vec![0.5f32; 100];
let energy = compute_rms_energy(&signal);
assert!((energy - 0.5).abs() < 1e-5);
}
#[test]
fn rms_energy_above_threshold() {
let loud = vec![0.8f32; 256];
let energy = compute_rms_energy(&loud);
assert!(energy > 0.01, "Loud signal should exceed default threshold");
}
#[test]
fn rms_energy_below_threshold_for_quiet() {
let quiet = vec![0.001f32; 256];
let energy = compute_rms_energy(&quiet);
assert!(
energy < 0.01,
"Very quiet signal should be below default threshold"
);
}
// ── WAV encoding tests ─────────────────────────────────
#[test]
fn wav_header_is_valid() {
let samples = vec![0.0f32; 100];
let wav = encode_wav_from_f32(&samples, 16000, 1);
// RIFF header
assert_eq!(&wav[0..4], b"RIFF");
assert_eq!(&wav[8..12], b"WAVE");
// fmt chunk
assert_eq!(&wav[12..16], b"fmt ");
let fmt_size = u32::from_le_bytes(wav[16..20].try_into().unwrap());
assert_eq!(fmt_size, 16);
// PCM format
let format = u16::from_le_bytes(wav[20..22].try_into().unwrap());
assert_eq!(format, 1);
// Channels
let channels = u16::from_le_bytes(wav[22..24].try_into().unwrap());
assert_eq!(channels, 1);
// Sample rate
let sr = u32::from_le_bytes(wav[24..28].try_into().unwrap());
assert_eq!(sr, 16000);
// data chunk
assert_eq!(&wav[36..40], b"data");
let data_size = u32::from_le_bytes(wav[40..44].try_into().unwrap());
assert_eq!(data_size, 200); // 100 samples * 2 bytes each
}
#[test]
fn wav_total_size_correct() {
let samples = vec![0.0f32; 50];
let wav = encode_wav_from_f32(&samples, 44100, 2);
// header (44 bytes) + data (50 * 2 = 100 bytes)
assert_eq!(wav.len(), 144);
}
#[test]
fn wav_encodes_clipped_samples() {
// Samples outside [-1, 1] should be clamped
let samples = vec![-2.0f32, 2.0, 0.0];
let wav = encode_wav_from_f32(&samples, 16000, 1);
let s0 = i16::from_le_bytes(wav[44..46].try_into().unwrap());
let s1 = i16::from_le_bytes(wav[46..48].try_into().unwrap());
let s2 = i16::from_le_bytes(wav[48..50].try_into().unwrap());
assert_eq!(s0, -32767); // clamped to -1.0
assert_eq!(s1, 32767); // clamped to 1.0
assert_eq!(s2, 0);
}
// ── Config parsing tests ───────────────────────────────
#[test]
fn voice_wake_config_defaults() {
let config = VoiceWakeConfig::default();
assert_eq!(config.wake_word, "hey zeroclaw");
assert_eq!(config.silence_timeout_ms, 2000);
assert!((config.energy_threshold - 0.01).abs() < f32::EPSILON);
assert_eq!(config.max_capture_secs, 30);
}
#[test]
fn voice_wake_config_deserialize_partial() {
let toml_str = r#"
wake_word = "okay agent"
max_capture_secs = 60
"#;
let config: VoiceWakeConfig = toml::from_str(toml_str).unwrap();
assert_eq!(config.wake_word, "okay agent");
assert_eq!(config.max_capture_secs, 60);
// Defaults preserved for unset fields
assert_eq!(config.silence_timeout_ms, 2000);
assert!((config.energy_threshold - 0.01).abs() < f32::EPSILON);
}
#[test]
fn voice_wake_config_deserialize_all_fields() {
let toml_str = r#"
wake_word = "hello bot"
silence_timeout_ms = 3000
energy_threshold = 0.05
max_capture_secs = 15
"#;
let config: VoiceWakeConfig = toml::from_str(toml_str).unwrap();
assert_eq!(config.wake_word, "hello bot");
assert_eq!(config.silence_timeout_ms, 3000);
assert!((config.energy_threshold - 0.05).abs() < f32::EPSILON);
assert_eq!(config.max_capture_secs, 15);
}
#[test]
fn voice_wake_config_channel_config_trait() {
assert_eq!(VoiceWakeConfig::name(), "VoiceWake");
assert_eq!(VoiceWakeConfig::desc(), "voice wake word detection");
}
// ── State transition logic tests ───────────────────────
#[test]
fn energy_threshold_determines_trigger() {
let threshold = 0.01f32;
let quiet_energy = compute_rms_energy(&vec![0.005f32; 256]);
let loud_energy = compute_rms_energy(&vec![0.5f32; 256]);
assert!(quiet_energy < threshold, "Quiet should not trigger");
assert!(loud_energy >= threshold, "Loud should trigger");
}
#[test]
fn state_transitions_are_deterministic() {
// Verify that the state enum values are distinct and copyable
let states = [
WakeState::Listening,
WakeState::Triggered,
WakeState::Capturing,
];
for (i, a) in states.iter().enumerate() {
for (j, b) in states.iter().enumerate() {
if i == j {
assert_eq!(a, b);
} else {
assert_ne!(a, b);
}
}
}
}
#[test]
fn channel_config_impl() {
// VoiceWakeConfig implements ChannelConfig
assert_eq!(VoiceWakeConfig::name(), "VoiceWake");
assert!(!VoiceWakeConfig::desc().is_empty());
}
#[test]
fn voice_wake_channel_name() {
let config = VoiceWakeConfig::default();
let transcription_config = TranscriptionConfig::default();
let channel = VoiceWakeChannel::new(config, transcription_config);
assert_eq!(channel.name(), "voice_wake");
}
}
@@ -15,21 +15,22 @@ pub use schema::{
ElevenLabsTtsConfig, EmbeddingRouteConfig, EstopConfig, FeishuConfig, GatewayConfig,
GoogleSttConfig, GoogleTtsConfig, GoogleWorkspaceAllowedOperation, GoogleWorkspaceConfig,
HardwareConfig, HardwareTransport, HeartbeatConfig, HooksConfig, HttpRequestConfig,
IMessageConfig, IdentityConfig, ImageProviderDalleConfig, ImageProviderFluxConfig,
ImageProviderImagenConfig, ImageProviderStabilityConfig, JiraConfig, KnowledgeConfig,
LarkConfig, LinkedInConfig, LinkedInContentConfig, LinkedInImageConfig, LocalWhisperConfig,
MatrixConfig, McpConfig, McpServerConfig, McpTransport, MemoryConfig, Microsoft365Config,
ModelRouteConfig, MultimodalConfig, NextcloudTalkConfig, NodeTransportConfig, NodesConfig,
NotionConfig, ObservabilityConfig, OpenAiSttConfig, OpenAiTtsConfig, OpenVpnTunnelConfig,
OtpConfig, OtpMethod, PacingConfig, PeripheralBoardConfig, PeripheralsConfig, PluginsConfig,
ProjectIntelConfig, ProxyConfig, ProxyScope, QdrantConfig, QueryClassificationConfig,
ReliabilityConfig, ResourceLimitsConfig, RuntimeConfig, SandboxBackend, SandboxConfig,
SchedulerConfig, SecretsConfig, SecurityConfig, SecurityOpsConfig, SkillCreationConfig,
SkillsConfig, SkillsPromptInjectionMode, SlackConfig, StorageConfig, StorageProviderConfig,
StorageProviderSection, StreamMode, SwarmConfig, SwarmStrategy, TelegramConfig,
TextBrowserConfig, ToolFilterGroup, ToolFilterGroupMode, TranscriptionConfig, TtsConfig,
TunnelConfig, VerifiableIntentConfig, WebFetchConfig, WebSearchConfig, WebhookConfig,
WhatsAppChatPolicy, WhatsAppWebMode, WorkspaceConfig, DEFAULT_GWS_SERVICES,
IMessageConfig, IdentityConfig, ImageGenConfig, ImageProviderDalleConfig,
ImageProviderFluxConfig, ImageProviderImagenConfig, ImageProviderStabilityConfig, JiraConfig,
KnowledgeConfig, LarkConfig, LinkedInConfig, LinkedInContentConfig, LinkedInImageConfig,
LocalWhisperConfig, MatrixConfig, McpConfig, McpServerConfig, McpTransport, MemoryConfig,
Microsoft365Config, ModelRouteConfig, MultimodalConfig, NextcloudTalkConfig,
NodeTransportConfig, NodesConfig, NotionConfig, ObservabilityConfig, OpenAiSttConfig,
OpenAiTtsConfig, OpenVpnTunnelConfig, OtpConfig, OtpMethod, PacingConfig,
PeripheralBoardConfig, PeripheralsConfig, PluginsConfig, ProjectIntelConfig, ProxyConfig,
ProxyScope, QdrantConfig, QueryClassificationConfig, ReliabilityConfig, ResourceLimitsConfig,
RuntimeConfig, SandboxBackend, SandboxConfig, SchedulerConfig, SecretsConfig, SecurityConfig,
SecurityOpsConfig, SkillCreationConfig, SkillsConfig, SkillsPromptInjectionMode, SlackConfig,
StorageConfig, StorageProviderConfig, StorageProviderSection, StreamMode, SwarmConfig,
SwarmStrategy, TelegramConfig, TextBrowserConfig, ToolFilterGroup, ToolFilterGroupMode,
TranscriptionConfig, TtsConfig, TunnelConfig, VerifiableIntentConfig, WebFetchConfig,
WebSearchConfig, WebhookConfig, WhatsAppChatPolicy, WhatsAppWebMode, WorkspaceConfig,
DEFAULT_GWS_SERVICES,
};
pub fn name_and_presence<T: traits::ChannelConfig>(channel: Option<&T>) -> (&'static str, bool) {
+212 -8
@@ -357,6 +357,10 @@ pub struct Config {
#[serde(default)]
pub linkedin: LinkedInConfig,
/// Standalone image generation tool configuration (`[image_gen]`).
#[serde(default)]
pub image_gen: ImageGenConfig,
/// Plugin system configuration (`[plugins]`).
#[serde(default)]
pub plugins: PluginsConfig,
@@ -1248,6 +1252,12 @@ pub struct AgentConfig {
/// Default: `[]` (no filtering — all tools included).
#[serde(default)]
pub tool_filter_groups: Vec<ToolFilterGroup>,
/// Maximum characters for the assembled system prompt. When `> 0`, the prompt
/// is truncated to this limit after assembly (keeping the top portion which
/// contains identity and safety instructions). `0` means unlimited.
/// Useful for small-context models (e.g. glm-4.5-air ~8K tokens → set to 8000).
#[serde(default = "default_max_system_prompt_chars")]
pub max_system_prompt_chars: usize,
}
fn default_agent_max_tool_iterations() -> usize {
@@ -1266,6 +1276,10 @@ fn default_agent_tool_dispatcher() -> String {
"auto".into()
}
fn default_max_system_prompt_chars() -> usize {
0
}
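A minimal sketch of the truncation rule described in the `max_system_prompt_chars` doc comment (keep the top of the prompt, cap at the character budget, `0` means unlimited); the helper name is hypothetical, not part of this diff:

```rust
/// Hypothetical helper illustrating the `max_system_prompt_chars` rule:
/// keep the top portion (identity/safety text comes first) and cap at
/// `max_chars` characters; `0` disables truncation.
fn truncate_system_prompt(prompt: &str, max_chars: usize) -> String {
    if max_chars == 0 {
        return prompt.to_string();
    }
    prompt.chars().take(max_chars).collect()
}

fn main() {
    assert_eq!(truncate_system_prompt("identity + safety + tools", 8), "identity");
    assert_eq!(truncate_system_prompt("short", 0), "short");
}
```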
impl Default for AgentConfig {
fn default() -> Self {
Self {
@@ -1277,6 +1291,7 @@ impl Default for AgentConfig {
tool_dispatcher: default_agent_tool_dispatcher(),
tool_call_dedup_exempt: Vec::new(),
tool_filter_groups: Vec::new(),
max_system_prompt_chars: default_max_system_prompt_chars(),
}
}
}
@@ -2963,6 +2978,46 @@ impl Default for ImageProviderFluxConfig {
}
}
// ── Standalone Image Generation ─────────────────────────────────
/// Standalone image generation tool configuration (`[image_gen]`).
///
/// When enabled, registers an `image_gen` tool that generates images via
/// fal.ai's synchronous API (Flux / Nano Banana models) and saves them
/// to the workspace `images/` directory.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageGenConfig {
/// Enable the standalone image generation tool. Default: false.
#[serde(default)]
pub enabled: bool,
/// Default fal.ai model identifier.
#[serde(default = "default_image_gen_model")]
pub default_model: String,
/// Environment variable name holding the fal.ai API key.
#[serde(default = "default_image_gen_api_key_env")]
pub api_key_env: String,
}
fn default_image_gen_model() -> String {
"fal-ai/flux/schnell".into()
}
fn default_image_gen_api_key_env() -> String {
"FAL_API_KEY".into()
}
impl Default for ImageGenConfig {
fn default() -> Self {
Self {
enabled: false,
default_model: default_image_gen_model(),
api_key_env: default_image_gen_api_key_env(),
}
}
}
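As a usage sketch, the new section can be enabled like this in TOML (values shown are the struct's defaults apart from `enabled`):

```toml
[image_gen]
enabled = true                        # default: false
default_model = "fal-ai/flux/schnell" # default fal.ai model
api_key_env = "FAL_API_KEY"           # env var holding the fal.ai API key
```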
// ── Claude Code ─────────────────────────────────────────────────
/// Claude Code CLI tool configuration (`[claude_code]` section).
@@ -4867,6 +4922,8 @@ pub struct ChannelsConfig {
pub telegram: Option<TelegramConfig>,
/// Discord bot channel configuration.
pub discord: Option<DiscordConfig>,
/// Discord history channel — logs ALL messages and forwards @mentions to agent.
pub discord_history: Option<DiscordHistoryConfig>,
/// Slack bot channel configuration.
pub slack: Option<SlackConfig>,
/// Mattermost bot channel configuration.
@@ -4889,6 +4946,8 @@ pub struct ChannelsConfig {
pub nextcloud_talk: Option<NextcloudTalkConfig>,
/// Email channel configuration.
pub email: Option<crate::channels::email_channel::EmailConfig>,
/// Gmail Pub/Sub push notification channel configuration.
pub gmail_push: Option<crate::channels::gmail_push::GmailPushConfig>,
/// IRC channel configuration.
pub irc: Option<IrcConfig>,
/// Lark channel configuration.
@@ -4913,6 +4972,9 @@ pub struct ChannelsConfig {
pub reddit: Option<RedditConfig>,
/// Bluesky channel configuration (AT Protocol).
pub bluesky: Option<BlueskyConfig>,
/// Voice wake word detection channel configuration.
#[cfg(feature = "voice-wake")]
pub voice_wake: Option<VoiceWakeConfig>,
/// Base timeout in seconds for processing a single channel message (LLM + tools).
/// Runtime uses this as a per-turn budget that scales with tool-loop depth
/// (up to 4x, capped) so one slow/retried model call does not consume the
@@ -4995,6 +5057,10 @@ impl ChannelsConfig {
Box::new(ConfigWrapper::new(self.email.as_ref())),
self.email.is_some(),
),
(
Box::new(ConfigWrapper::new(self.gmail_push.as_ref())),
self.gmail_push.is_some(),
),
(
Box::new(ConfigWrapper::new(self.irc.as_ref())),
self.irc.is_some(),
@@ -5036,6 +5102,11 @@ impl ChannelsConfig {
Box::new(ConfigWrapper::new(self.bluesky.as_ref())),
self.bluesky.is_some(),
),
#[cfg(feature = "voice-wake")]
(
Box::new(ConfigWrapper::new(self.voice_wake.as_ref())),
self.voice_wake.is_some(),
),
]
}
@@ -5063,6 +5134,7 @@ impl Default for ChannelsConfig {
cli: true,
telegram: None,
discord: None,
discord_history: None,
slack: None,
mattermost: None,
webhook: None,
@@ -5074,6 +5146,7 @@ impl Default for ChannelsConfig {
wati: None,
nextcloud_talk: None,
email: None,
gmail_push: None,
irc: None,
lark: None,
feishu: None,
@@ -5087,6 +5160,8 @@ impl Default for ChannelsConfig {
clawdtalk: None,
reddit: None,
bluesky: None,
#[cfg(feature = "voice-wake")]
voice_wake: None,
message_timeout_secs: default_channel_message_timeout_secs(),
ack_reactions: true,
show_tool_calls: false,
@@ -5190,6 +5265,39 @@ impl ChannelConfig for DiscordConfig {
}
}
/// Discord history channel — logs ALL messages to discord.db and forwards @mentions to the agent.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct DiscordHistoryConfig {
/// Discord bot token (from Discord Developer Portal).
pub bot_token: String,
/// Optional guild (server) ID to restrict logging to a single guild.
pub guild_id: Option<String>,
/// Allowed Discord user IDs. Empty = allow all (open logging).
#[serde(default)]
pub allowed_users: Vec<String>,
/// Discord channel IDs to watch. Empty = watch all channels.
#[serde(default)]
pub channel_ids: Vec<String>,
/// When true (default), store Direct Messages in discord.db.
#[serde(default = "default_true")]
pub store_dms: bool,
/// When true (default), respond to @mentions in Direct Messages.
#[serde(default = "default_true")]
pub respond_to_dms: bool,
/// Per-channel proxy URL (http, https, socks5, socks5h).
#[serde(default)]
pub proxy_url: Option<String>,
}
impl ChannelConfig for DiscordHistoryConfig {
fn name() -> &'static str {
"Discord History"
}
fn desc() -> &'static str {
"log all messages and forward @mentions"
}
}
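A configuration sketch for the new channel, with the serde defaults spelled out. The parent table name is an assumption (this diff does not show the TOML path; it is assumed to nest alongside the other channels):

```toml
# Hypothetical placement — the exact parent table name is not shown in this diff.
[channels.discord_history]
bot_token = "YOUR_BOT_TOKEN"
guild_id = "123456789012345678"  # optional: restrict logging to one guild
allowed_users = []               # empty = allow all (open logging)
channel_ids = []                 # empty = watch all channels
store_dms = true                 # default: store DMs in discord.db
respond_to_dms = true            # default: respond to @mentions in DMs
```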
/// Slack bot channel configuration.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct SlackConfig {
@@ -6338,6 +6446,74 @@ impl ChannelConfig for BlueskyConfig {
}
}
/// Voice wake word detection channel configuration.
///
/// Listens on the default microphone for a configurable wake word,
/// then captures the following utterance and transcribes it via the
/// existing transcription API.
#[cfg(feature = "voice-wake")]
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct VoiceWakeConfig {
/// Wake word phrase to listen for (case-insensitive substring match).
/// Default: `"hey zeroclaw"`.
#[serde(default = "default_voice_wake_word")]
pub wake_word: String,
/// Silence timeout in milliseconds — how long to wait after the last
/// energy spike before finalizing a capture window. Default: `2000`.
#[serde(default = "default_voice_wake_silence_timeout_ms")]
pub silence_timeout_ms: u32,
/// RMS energy threshold for voice activity detection. Samples below
/// this level are treated as silence. Default: `0.01`.
#[serde(default = "default_voice_wake_energy_threshold")]
pub energy_threshold: f32,
/// Maximum capture duration in seconds before forcing transcription.
/// Default: `30`.
#[serde(default = "default_voice_wake_max_capture_secs")]
pub max_capture_secs: u32,
}
#[cfg(feature = "voice-wake")]
fn default_voice_wake_word() -> String {
"hey zeroclaw".into()
}
#[cfg(feature = "voice-wake")]
fn default_voice_wake_silence_timeout_ms() -> u32 {
2000
}
#[cfg(feature = "voice-wake")]
fn default_voice_wake_energy_threshold() -> f32 {
0.01
}
#[cfg(feature = "voice-wake")]
fn default_voice_wake_max_capture_secs() -> u32 {
30
}
#[cfg(feature = "voice-wake")]
impl Default for VoiceWakeConfig {
fn default() -> Self {
Self {
wake_word: default_voice_wake_word(),
silence_timeout_ms: default_voice_wake_silence_timeout_ms(),
energy_threshold: default_voice_wake_energy_threshold(),
max_capture_secs: default_voice_wake_max_capture_secs(),
}
}
}
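The `energy_threshold` field gates a root-mean-square voice-activity check; a minimal sketch of that comparison, under the assumption of f32 PCM samples (the function name is hypothetical):

```rust
/// Hypothetical RMS voice-activity check: sample windows whose RMS energy
/// falls below `energy_threshold` are treated as silence.
fn is_voiced(samples: &[f32], energy_threshold: f32) -> bool {
    if samples.is_empty() {
        return false;
    }
    let mean_sq = samples.iter().map(|s| s * s).sum::<f32>() / samples.len() as f32;
    mean_sq.sqrt() >= energy_threshold
}

fn main() {
    assert!(!is_voiced(&[0.0; 160], 0.01)); // silence stays below the default threshold
    assert!(is_voiced(&[0.5; 160], 0.01));  // sustained speech-level energy passes
}
```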
#[cfg(feature = "voice-wake")]
impl ChannelConfig for VoiceWakeConfig {
fn name() -> &'static str {
"VoiceWake"
}
fn desc() -> &'static str {
"voice wake word detection"
}
}
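A configuration sketch with the defaults written out. The parent table name is an assumption, and the channel only exists when the crate is built with the `voice-wake` feature:

```toml
# Hypothetical placement — parent table name not shown in this diff.
# Requires building with the "voice-wake" cargo feature.
[channels.voice_wake]
wake_word = "hey zeroclaw"  # case-insensitive substring match
silence_timeout_ms = 2000   # finalize capture after 2s without an energy spike
energy_threshold = 0.01     # RMS level below which samples count as silence
max_capture_secs = 30       # hard cap before forcing transcription
```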
/// Nostr channel configuration (NIP-04 + NIP-17 private messages)
#[cfg(feature = "channel-nostr")]
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
@@ -6811,6 +6987,7 @@ impl Default for Config {
node_transport: NodeTransportConfig::default(),
knowledge: KnowledgeConfig::default(),
linkedin: LinkedInConfig::default(),
image_gen: ImageGenConfig::default(),
plugins: PluginsConfig::default(),
locale: None,
verifiable_intent: VerifiableIntentConfig::default(),
@@ -7607,6 +7784,13 @@ impl Config {
"config.channels_config.email.password",
)?;
}
if let Some(ref mut gp) = config.channels_config.gmail_push {
decrypt_secret(
&store,
&mut gp.oauth_token,
"config.channels_config.gmail_push.oauth_token",
)?;
}
if let Some(ref mut irc) = config.channels_config.irc {
decrypt_optional_secret(
&store,
@@ -9038,6 +9222,13 @@ impl Config {
"config.channels_config.email.password",
)?;
}
if let Some(ref mut gp) = config_to_save.channels_config.gmail_push {
encrypt_secret(
&store,
&mut gp.oauth_token,
"config.channels_config.gmail_push.oauth_token",
)?;
}
if let Some(ref mut irc) = config_to_save.channels_config.irc {
encrypt_optional_secret(
&store,
@@ -9275,7 +9466,6 @@ mod tests {
use std::os::unix::fs::PermissionsExt;
use std::path::PathBuf;
use std::sync::{Arc, Mutex as StdMutex};
#[cfg(unix)]
use tempfile::TempDir;
use tokio::sync::{Mutex, MutexGuard};
use tokio::test;
@@ -9666,6 +9856,7 @@ default_temperature = 0.7
proxy_url: None,
}),
discord: None,
discord_history: None,
slack: None,
mattermost: None,
webhook: None,
@@ -9677,6 +9868,7 @@ default_temperature = 0.7
wati: None,
nextcloud_talk: None,
email: None,
gmail_push: None,
irc: None,
lark: None,
feishu: None,
@@ -9690,6 +9882,8 @@ default_temperature = 0.7
clawdtalk: None,
reddit: None,
bluesky: None,
#[cfg(feature = "voice-wake")]
voice_wake: None,
message_timeout_secs: 300,
ack_reactions: true,
show_tool_calls: true,
@@ -9734,6 +9928,7 @@ default_temperature = 0.7
node_transport: NodeTransportConfig::default(),
knowledge: KnowledgeConfig::default(),
linkedin: LinkedInConfig::default(),
image_gen: ImageGenConfig::default(),
plugins: PluginsConfig::default(),
locale: None,
verifiable_intent: VerifiableIntentConfig::default(),
@@ -10115,6 +10310,7 @@ default_temperature = 0.7
node_transport: NodeTransportConfig::default(),
knowledge: KnowledgeConfig::default(),
linkedin: LinkedInConfig::default(),
image_gen: ImageGenConfig::default(),
plugins: PluginsConfig::default(),
locale: None,
verifiable_intent: VerifiableIntentConfig::default(),
@@ -10501,6 +10697,7 @@ allowed_users = ["@ops:matrix.org"]
cli: true,
telegram: None,
discord: None,
discord_history: None,
slack: None,
mattermost: None,
webhook: None,
@@ -10522,6 +10719,7 @@ allowed_users = ["@ops:matrix.org"]
wati: None,
nextcloud_talk: None,
email: None,
gmail_push: None,
irc: None,
lark: None,
feishu: None,
@@ -10534,6 +10732,8 @@ allowed_users = ["@ops:matrix.org"]
clawdtalk: None,
reddit: None,
bluesky: None,
#[cfg(feature = "voice-wake")]
voice_wake: None,
message_timeout_secs: 300,
ack_reactions: true,
show_tool_calls: true,
@@ -10816,6 +11016,7 @@ channel_id = "C123"
cli: true,
telegram: None,
discord: None,
discord_history: None,
slack: None,
mattermost: None,
webhook: None,
@@ -10841,6 +11042,7 @@ channel_id = "C123"
wati: None,
nextcloud_talk: None,
email: None,
gmail_push: None,
irc: None,
lark: None,
feishu: None,
@@ -10853,6 +11055,8 @@ channel_id = "C123"
clawdtalk: None,
reddit: None,
bluesky: None,
#[cfg(feature = "voice-wake")]
voice_wake: None,
message_timeout_secs: 300,
ack_reactions: true,
show_tool_calls: true,
@@ -13671,12 +13875,12 @@ require_otp_to_resume = true
async fn ensure_bootstrap_files_creates_missing_files() {
let tmp = TempDir::new().unwrap();
let ws = tmp.path().join("workspace");
tokio::fs::create_dir_all(&ws).await.unwrap();
let _: () = tokio::fs::create_dir_all(&ws).await.unwrap();
ensure_bootstrap_files(&ws).await.unwrap();
let soul = tokio::fs::read_to_string(ws.join("SOUL.md")).await.unwrap();
let identity = tokio::fs::read_to_string(ws.join("IDENTITY.md"))
let soul: String = tokio::fs::read_to_string(ws.join("SOUL.md")).await.unwrap();
let identity: String = tokio::fs::read_to_string(ws.join("IDENTITY.md"))
.await
.unwrap();
assert!(soul.contains("SOUL.md"));
@@ -13687,21 +13891,21 @@ require_otp_to_resume = true
async fn ensure_bootstrap_files_does_not_overwrite_existing() {
let tmp = TempDir::new().unwrap();
let ws = tmp.path().join("workspace");
tokio::fs::create_dir_all(&ws).await.unwrap();
let _: () = tokio::fs::create_dir_all(&ws).await.unwrap();
let custom = "# My custom SOUL";
tokio::fs::write(ws.join("SOUL.md"), custom).await.unwrap();
let _: () = tokio::fs::write(ws.join("SOUL.md"), custom).await.unwrap();
ensure_bootstrap_files(&ws).await.unwrap();
let soul = tokio::fs::read_to_string(ws.join("SOUL.md")).await.unwrap();
let soul: String = tokio::fs::read_to_string(ws.join("SOUL.md")).await.unwrap();
assert_eq!(
soul, custom,
"ensure_bootstrap_files must not overwrite existing files"
);
// IDENTITY.md should still be created since it was missing
let identity = tokio::fs::read_to_string(ws.join("IDENTITY.md"))
let identity: String = tokio::fs::read_to_string(ws.join("IDENTITY.md"))
.await
.unwrap();
assert!(identity.contains("IDENTITY.md"));
+3 -1
@@ -23,7 +23,7 @@ fn extract_bearer_token(headers: &HeaderMap) -> Option<&str> {
}
/// Verify bearer token against PairingGuard. Returns error response if unauthorized.
fn require_auth(
pub(super) fn require_auth(
state: &AppState,
headers: &HeaderMap,
) -> Result<(), (StatusCode, Json<serde_json::Value>)> {
@@ -1429,6 +1429,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -1439,6 +1440,7 @@ mod tests {
device_registry: None,
pending_pairings: None,
path_prefix: String::new(),
canvas_store: crate::tools::canvas::CanvasStore::new(),
}
}
+278
@@ -0,0 +1,278 @@
//! Live Canvas gateway routes — REST + WebSocket for real-time canvas updates.
//!
//! - `GET /api/canvas/:id` — get current canvas content (JSON)
//! - `POST /api/canvas/:id` — push content programmatically
//! - `GET /api/canvas` — list all active canvases
//! - `WS /ws/canvas/:id` — real-time canvas updates via WebSocket
use super::api::require_auth;
use super::AppState;
use axum::{
extract::{
ws::{Message, WebSocket},
Path, State, WebSocketUpgrade,
},
http::{header, HeaderMap, StatusCode},
response::{IntoResponse, Json},
};
use futures_util::{SinkExt, StreamExt};
use serde::Deserialize;
/// POST /api/canvas/:id request body.
#[derive(Deserialize)]
pub struct CanvasPostBody {
pub content_type: Option<String>,
pub content: String,
}
/// GET /api/canvas — list all active canvases.
pub async fn handle_canvas_list(
State(state): State<AppState>,
headers: HeaderMap,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
let ids = state.canvas_store.list();
Json(serde_json::json!({ "canvases": ids })).into_response()
}
/// GET /api/canvas/:id — get current canvas content.
pub async fn handle_canvas_get(
State(state): State<AppState>,
headers: HeaderMap,
Path(id): Path<String>,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
match state.canvas_store.snapshot(&id) {
Some(frame) => Json(serde_json::json!({
"canvas_id": id,
"frame": frame,
}))
.into_response(),
None => (
StatusCode::NOT_FOUND,
Json(serde_json::json!({ "error": format!("Canvas '{}' not found", id) })),
)
.into_response(),
}
}
/// GET /api/canvas/:id/history — get canvas frame history.
pub async fn handle_canvas_history(
State(state): State<AppState>,
headers: HeaderMap,
Path(id): Path<String>,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
let history = state.canvas_store.history(&id);
Json(serde_json::json!({
"canvas_id": id,
"frames": history,
}))
.into_response()
}
/// POST /api/canvas/:id — push content to a canvas.
pub async fn handle_canvas_post(
State(state): State<AppState>,
headers: HeaderMap,
Path(id): Path<String>,
Json(body): Json<CanvasPostBody>,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
let content_type = body.content_type.as_deref().unwrap_or("html");
// Validate content_type against allowed set (prevent injecting "eval" frames via REST).
if !crate::tools::canvas::ALLOWED_CONTENT_TYPES.contains(&content_type) {
return (
StatusCode::BAD_REQUEST,
Json(serde_json::json!({
"error": format!(
"Invalid content_type '{}'. Allowed: {:?}",
content_type,
crate::tools::canvas::ALLOWED_CONTENT_TYPES
)
})),
)
.into_response();
}
// Enforce content size limit (same as tool-side validation).
if body.content.len() > crate::tools::canvas::MAX_CONTENT_SIZE {
return (
StatusCode::PAYLOAD_TOO_LARGE,
Json(serde_json::json!({
"error": format!(
"Content exceeds maximum size of {} bytes",
crate::tools::canvas::MAX_CONTENT_SIZE
)
})),
)
.into_response();
}
match state.canvas_store.render(&id, content_type, &body.content) {
Some(frame) => (
StatusCode::CREATED,
Json(serde_json::json!({
"canvas_id": id,
"frame": frame,
})),
)
.into_response(),
None => (
StatusCode::TOO_MANY_REQUESTS,
Json(serde_json::json!({
"error": "Maximum canvas count reached. Clear unused canvases first."
})),
)
.into_response(),
}
}
/// DELETE /api/canvas/:id — clear a canvas.
pub async fn handle_canvas_clear(
State(state): State<AppState>,
headers: HeaderMap,
Path(id): Path<String>,
) -> impl IntoResponse {
if let Err(e) = require_auth(&state, &headers) {
return e.into_response();
}
state.canvas_store.clear(&id);
Json(serde_json::json!({
"canvas_id": id,
"status": "cleared",
}))
.into_response()
}
/// WS /ws/canvas/:id — real-time canvas updates.
pub async fn handle_ws_canvas(
State(state): State<AppState>,
Path(id): Path<String>,
headers: HeaderMap,
ws: WebSocketUpgrade,
) -> impl IntoResponse {
// Auth check (same pattern as ws::handle_ws_chat)
if state.pairing.require_pairing() {
let token = headers
.get(header::AUTHORIZATION)
.and_then(|v| v.to_str().ok())
.and_then(|auth| auth.strip_prefix("Bearer "))
.or_else(|| {
// Fallback: accept the token as a "bearer.<token>" Sec-WebSocket-Protocol entry
headers
.get("sec-websocket-protocol")
.and_then(|v| v.to_str().ok())
.and_then(|protos| {
protos
.split(',')
.map(|p| p.trim())
.find_map(|p| p.strip_prefix("bearer."))
})
})
.unwrap_or("");
if !state.pairing.is_authenticated(token) {
return (
StatusCode::UNAUTHORIZED,
"Unauthorized — provide Authorization header or Sec-WebSocket-Protocol bearer",
)
.into_response();
}
}
ws.on_upgrade(move |socket| handle_canvas_socket(socket, state, id))
.into_response()
}
async fn handle_canvas_socket(socket: WebSocket, state: AppState, canvas_id: String) {
let (mut sender, mut receiver) = socket.split();
// Subscribe to canvas updates
let mut rx = match state.canvas_store.subscribe(&canvas_id) {
Some(rx) => rx,
None => {
let msg = serde_json::json!({
"type": "error",
"error": "Maximum canvas count reached",
});
let _ = sender.send(Message::Text(msg.to_string().into())).await;
return;
}
};
// Send current state immediately if available
if let Some(frame) = state.canvas_store.snapshot(&canvas_id) {
let msg = serde_json::json!({
"type": "frame",
"canvas_id": canvas_id,
"frame": frame,
});
let _ = sender.send(Message::Text(msg.to_string().into())).await;
}
// Send a connected acknowledgement
let ack = serde_json::json!({
"type": "connected",
"canvas_id": canvas_id,
});
let _ = sender.send(Message::Text(ack.to_string().into())).await;
// Spawn a task that forwards broadcast updates to the WebSocket
let canvas_id_clone = canvas_id.clone();
let send_task = tokio::spawn(async move {
loop {
match rx.recv().await {
Ok(frame) => {
let msg = serde_json::json!({
"type": "frame",
"canvas_id": canvas_id_clone,
"frame": frame,
});
if sender
.send(Message::Text(msg.to_string().into()))
.await
.is_err()
{
break;
}
}
Err(tokio::sync::broadcast::error::RecvError::Lagged(n)) => {
// Client fell behind — notify and continue rather than disconnecting.
let msg = serde_json::json!({
"type": "lagged",
"canvas_id": canvas_id_clone,
"missed_frames": n,
});
let _ = sender.send(Message::Text(msg.to_string().into())).await;
}
Err(tokio::sync::broadcast::error::RecvError::Closed) => break,
}
}
});
// Read loop: we mostly ignore incoming messages but handle close/ping
while let Some(msg) = receiver.next().await {
match msg {
Ok(Message::Close(_)) | Err(_) => break,
_ => {} // Ignore all other messages (pings are handled by axum)
}
}
// Abort the send task when the connection is closed
send_task.abort();
}
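Derived from the socket handler above, a client on `/ws/canvas/:id` sees text messages of these shapes (one JSON object per WebSocket message; the `frame` payload's own fields are not shown in this diff):

```json
{ "type": "connected", "canvas_id": "demo" }
{ "type": "frame", "canvas_id": "demo", "frame": { } }
{ "type": "lagged", "canvas_id": "demo", "missed_frames": 3 }
{ "type": "error", "error": "Maximum canvas count reached" }
```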
+138 -17
@@ -11,14 +11,15 @@ pub mod api;
pub mod api_pairing;
#[cfg(feature = "plugins-wasm")]
pub mod api_plugins;
pub mod canvas;
pub mod nodes;
pub mod sse;
pub mod static_files;
pub mod ws;
use crate::channels::{
session_backend::SessionBackend, session_sqlite::SqliteSessionBackend, Channel, LinqChannel,
NextcloudTalkChannel, SendMessage, WatiChannel, WhatsAppChannel,
session_backend::SessionBackend, session_sqlite::SqliteSessionBackend, Channel,
GmailPushChannel, LinqChannel, NextcloudTalkChannel, SendMessage, WatiChannel, WhatsAppChannel,
};
use crate::config::Config;
use crate::cost::CostTracker;
@@ -28,6 +29,7 @@ use crate::runtime;
use crate::security::pairing::{constant_time_eq, is_public_bind, PairingGuard};
use crate::security::SecurityPolicy;
use crate::tools;
use crate::tools::canvas::CanvasStore;
use crate::tools::traits::ToolSpec;
use crate::util::truncate_with_ellipsis;
use anyhow::{Context, Result};
@@ -336,6 +338,8 @@ pub struct AppState {
/// Nextcloud Talk webhook secret for signature verification
pub nextcloud_talk_webhook_secret: Option<Arc<str>>,
pub wati: Option<Arc<WatiChannel>>,
/// Gmail Pub/Sub push notification channel
pub gmail_push: Option<Arc<GmailPushChannel>>,
/// Observability backend for metrics scraping
pub observer: Arc<dyn crate::observability::Observer>,
/// Registered tool specs (for web dashboard tools page)
@@ -356,6 +360,8 @@ pub struct AppState {
pub device_registry: Option<Arc<api_pairing::DeviceRegistry>>,
/// Pending pairing request store
pub pending_pairings: Option<Arc<api_pairing::PairingStore>>,
/// Shared canvas store for Live Canvas (A2UI) system
pub canvas_store: CanvasStore,
}
/// Run the HTTP gateway using axum with proper HTTP/1.1 compliance.
@@ -432,21 +438,25 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
(None, None)
};
let (mut tools_registry_raw, delegate_handle_gw) = tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
Arc::clone(&mem),
composio_key,
composio_entity_id,
&config.browser,
&config.http_request,
&config.web_fetch,
&config.workspace_dir,
&config.agents,
config.api_key.as_deref(),
&config,
);
let canvas_store = tools::CanvasStore::new();
let (mut tools_registry_raw, delegate_handle_gw, _reaction_handle_gw) =
tools::all_tools_with_runtime(
Arc::new(config.clone()),
&security,
runtime,
Arc::clone(&mem),
composio_key,
composio_entity_id,
&config.browser,
&config.http_request,
&config.web_fetch,
&config.workspace_dir,
&config.agents,
config.api_key.as_deref(),
&config,
Some(canvas_store.clone()),
);
// ── Wire MCP tools into the gateway tool registry (non-fatal) ───
// Without this, the `/api/tools` endpoint misses MCP tools.
@@ -629,6 +639,14 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
})
.map(Arc::from);
// Gmail Push channel (if configured and enabled)
let gmail_push_channel: Option<Arc<GmailPushChannel>> = config
.channels_config
.gmail_push
.as_ref()
.filter(|gp| gp.enabled)
.map(|gp| Arc::new(GmailPushChannel::new(gp.clone())));
// ── Session persistence for WS chat ─────────────────────
let session_backend: Option<Arc<dyn SessionBackend>> = if config.gateway.session_persistence {
match SqliteSessionBackend::new(&config.workspace_dir) {
@@ -800,6 +818,7 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
nextcloud_talk: nextcloud_talk_channel,
nextcloud_talk_webhook_secret,
wati: wati_channel,
gmail_push: gmail_push_channel,
observer: broadcast_observer,
tools_registry,
cost_tracker,
@@ -810,6 +829,7 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
device_registry,
pending_pairings,
path_prefix: path_prefix.unwrap_or("").to_string(),
canvas_store,
};
// Config PUT needs larger body limit (1MB)
@@ -834,6 +854,7 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
.route("/wati", get(handle_wati_verify))
.route("/wati", post(handle_wati_webhook))
.route("/nextcloud-talk", post(handle_nextcloud_talk_webhook))
.route("/webhook/gmail", post(handle_gmail_push_webhook))
// ── Web Dashboard API routes ──
.route("/api/status", get(api::handle_api_status))
.route("/api/config", get(api::handle_api_config_get))
@@ -874,6 +895,18 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
.route(
"/api/devices/{id}/token/rotate",
post(api_pairing::rotate_token),
)
// ── Live Canvas (A2UI) routes ──
.route("/api/canvas", get(canvas::handle_canvas_list))
.route(
"/api/canvas/{id}",
get(canvas::handle_canvas_get)
.post(canvas::handle_canvas_post)
.delete(canvas::handle_canvas_clear),
)
.route(
"/api/canvas/{id}/history",
get(canvas::handle_canvas_history),
);
// ── Plugin management API (requires plugins-wasm feature) ──
@@ -888,6 +921,8 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
.route("/api/events", get(sse::handle_sse_events))
// ── WebSocket agent chat ──
.route("/ws/chat", get(ws::handle_ws_chat))
// ── WebSocket canvas updates ──
.route("/ws/canvas/{id}", get(canvas::handle_ws_canvas))
// ── WebSocket node discovery ──
.route("/ws/nodes", get(nodes::handle_ws_nodes))
// ── Static assets (web dashboard) ──
@@ -1812,6 +1847,74 @@ async fn handle_nextcloud_talk_webhook(
(StatusCode::OK, Json(serde_json::json!({"status": "ok"})))
}
/// Maximum request body size for the Gmail webhook endpoint (1 MB).
/// Google Pub/Sub messages are typically under 10 KB.
const GMAIL_WEBHOOK_MAX_BODY: usize = 1024 * 1024;
/// POST /webhook/gmail — incoming Gmail Pub/Sub push notification
async fn handle_gmail_push_webhook(
State(state): State<AppState>,
headers: HeaderMap,
body: Bytes,
) -> impl IntoResponse {
let Some(ref gmail_push) = state.gmail_push else {
return (
StatusCode::NOT_FOUND,
Json(serde_json::json!({"error": "Gmail push not configured"})),
);
};
// Enforce body size limit.
if body.len() > GMAIL_WEBHOOK_MAX_BODY {
return (
StatusCode::PAYLOAD_TOO_LARGE,
Json(serde_json::json!({"error": "Request body too large"})),
);
}
// Authenticate the webhook request using a shared secret.
let secret = gmail_push.resolve_webhook_secret();
if !secret.is_empty() {
let provided = headers
.get(axum::http::header::AUTHORIZATION)
.and_then(|v| v.to_str().ok())
.and_then(|auth| auth.strip_prefix("Bearer "))
.unwrap_or("");
if provided != secret {
tracing::warn!("Gmail push webhook: unauthorized request");
return (
StatusCode::UNAUTHORIZED,
Json(serde_json::json!({"error": "Unauthorized"})),
);
}
}
let body_str = String::from_utf8_lossy(&body);
let envelope: crate::channels::gmail_push::PubSubEnvelope =
match serde_json::from_str(&body_str) {
Ok(e) => e,
Err(e) => {
tracing::warn!("Gmail push webhook: invalid payload: {e}");
return (
StatusCode::BAD_REQUEST,
Json(serde_json::json!({"error": "Invalid Pub/Sub envelope"})),
);
}
};
// Process the notification asynchronously (non-blocking for the webhook response)
let channel = Arc::clone(gmail_push);
tokio::spawn(async move {
if let Err(e) = channel.handle_notification(&envelope).await {
tracing::error!("Gmail push notification processing failed: {e:#}");
}
});
// Acknowledge immediately — Google Pub/Sub requires a 2xx within ~10s
(StatusCode::OK, Json(serde_json::json!({"status": "ok"})))
}
// ══════════════════════════════════════════════════════════════════════════════
// ADMIN HANDLERS (for CLI management)
// ══════════════════════════════════════════════════════════════════════════════
@@ -2000,6 +2103,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2010,6 +2114,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let response = handle_metrics(State(state)).await.into_response();
@@ -2056,6 +2161,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer,
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2066,6 +2172,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let response = handle_metrics(State(state)).await.into_response();
@@ -2441,6 +2548,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2451,6 +2559,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let mut headers = HeaderMap::new();
@@ -2511,6 +2620,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2521,6 +2631,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let headers = HeaderMap::new();
@@ -2593,6 +2704,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2603,6 +2715,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let response = handle_webhook(
@@ -2647,6 +2760,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2657,6 +2771,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let mut headers = HeaderMap::new();
@@ -2706,6 +2821,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2716,6 +2832,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let mut headers = HeaderMap::new();
@@ -2770,6 +2887,7 @@ mod tests {
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2780,6 +2898,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let response = Box::pin(handle_nextcloud_talk_webhook(
@@ -2830,6 +2949,7 @@ mod tests {
nextcloud_talk: Some(channel),
nextcloud_talk_webhook_secret: Some(Arc::from(secret)),
wati: None,
gmail_push: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
@@ -2840,6 +2960,7 @@ mod tests {
session_backend: None,
device_registry: None,
pending_pairings: None,
canvas_store: CanvasStore::new(),
};
let mut headers = HeaderMap::new();
+5 -1
View File
@@ -407,7 +407,11 @@ mod tests {
// Simpler: write a temp script.
let dir = tempfile::tempdir().unwrap();
let script_path = dir.path().join("tool.sh");
-std::fs::write(&script_path, format!("#!/bin/sh\necho '{}'\n", result_json)).unwrap();
+std::fs::write(
+    &script_path,
+    format!("#!/bin/sh\ncat > /dev/null\necho '{}'\n", result_json),
+)
+.unwrap();
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
+25
View File
@@ -46,6 +46,31 @@ impl SqliteMemory {
)
}
/// Like `new`, but stores data in `{db_name}.db` instead of `brain.db`.
pub fn new_named(workspace_dir: &Path, db_name: &str) -> anyhow::Result<Self> {
let db_path = workspace_dir.join("memory").join(format!("{db_name}.db"));
if let Some(parent) = db_path.parent() {
std::fs::create_dir_all(parent)?;
}
let conn = Self::open_connection(&db_path, None)?;
conn.execute_batch(
"PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA mmap_size = 8388608;
PRAGMA cache_size = -2000;
PRAGMA temp_store = MEMORY;",
)?;
Self::init_schema(&conn)?;
Ok(Self {
conn: Arc::new(Mutex::new(conn)),
db_path,
embedder: Arc::new(super::embeddings::NoopEmbedding),
vector_weight: 0.7,
keyword_weight: 0.3,
cache_max: 10_000,
})
}
/// Build SQLite memory with optional open timeout.
///
/// If `open_timeout_secs` is `Some(n)`, opening the database is limited to `n` seconds
+2
View File
@@ -197,6 +197,7 @@ pub async fn run_wizard(force: bool) -> Result<Config> {
node_transport: crate::config::NodeTransportConfig::default(),
knowledge: crate::config::KnowledgeConfig::default(),
linkedin: crate::config::LinkedInConfig::default(),
image_gen: crate::config::ImageGenConfig::default(),
plugins: crate::config::PluginsConfig::default(),
locale: None,
verifiable_intent: crate::config::VerifiableIntentConfig::default(),
@@ -620,6 +621,7 @@ async fn run_quick_setup_with_home(
node_transport: crate::config::NodeTransportConfig::default(),
knowledge: crate::config::KnowledgeConfig::default(),
linkedin: crate::config::LinkedInConfig::default(),
image_gen: crate::config::ImageGenConfig::default(),
plugins: crate::config::PluginsConfig::default(),
locale: None,
verifiable_intent: crate::config::VerifiableIntentConfig::default(),
+1
View File
@@ -412,6 +412,7 @@ mod tests {
));
let mut f = std::fs::File::create(&path).unwrap();
writeln!(f, "#!/bin/sh\ncat /dev/stdin").unwrap();
f.sync_all().unwrap();
drop(f);
#[cfg(unix)]
{
+10
View File
@@ -767,6 +767,12 @@ impl Provider for OpenAiCodexProvider {
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Mutex;
/// Mutex that serializes all tests which mutate process-global env vars
/// (`std::env::set_var` / `remove_var`). Each such test must hold this
/// lock for its entire duration so that parallel test threads don't race.
static ENV_MUTEX: Mutex<()> = Mutex::new(());
struct EnvGuard {
key: &'static str,
@@ -841,6 +847,7 @@ mod tests {
#[test]
fn resolve_responses_url_prefers_explicit_endpoint_env() {
let _lock = ENV_MUTEX.lock().unwrap_or_else(|e| e.into_inner());
let _endpoint_guard = EnvGuard::set(
CODEX_RESPONSES_URL_ENV,
Some("https://env.example.com/v1/responses"),
@@ -856,6 +863,7 @@ mod tests {
#[test]
fn resolve_responses_url_uses_provider_api_url_override() {
let _lock = ENV_MUTEX.lock().unwrap_or_else(|e| e.into_inner());
let _endpoint_guard = EnvGuard::set(CODEX_RESPONSES_URL_ENV, None);
let _base_guard = EnvGuard::set(CODEX_BASE_URL_ENV, None);
@@ -959,6 +967,7 @@ mod tests {
#[test]
fn resolve_reasoning_effort_prefers_configured_override() {
let _lock = ENV_MUTEX.lock().unwrap_or_else(|e| e.into_inner());
let _guard = EnvGuard::set("ZEROCLAW_CODEX_REASONING_EFFORT", Some("low"));
assert_eq!(
resolve_reasoning_effort("gpt-5-codex", Some("high")),
@@ -968,6 +977,7 @@ mod tests {
#[test]
fn resolve_reasoning_effort_uses_legacy_env_when_unconfigured() {
let _lock = ENV_MUTEX.lock().unwrap_or_else(|e| e.into_inner());
let _guard = EnvGuard::set("ZEROCLAW_CODEX_REASONING_EFFORT", Some("minimal"));
assert_eq!(
resolve_reasoning_effort("gpt-5-codex", None),
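The serialization pattern these tests rely on — a process-wide `Mutex` held for the test's full duration, plus an RAII guard that restores each variable on drop — can be sketched standalone. This is a minimal std-only sketch; the variable names are illustrative, not taken from the crate:

```rust
use std::env;
use std::sync::Mutex;

// std::env::set_var / remove_var mutate process-global state, so every
// test that touches env vars must hold this lock for its whole duration.
static ENV_MUTEX: Mutex<()> = Mutex::new(());

/// RAII guard: sets (or clears) an env var, restores the prior value on drop.
struct EnvGuard {
    key: &'static str,
    prev: Option<String>,
}

impl EnvGuard {
    fn set(key: &'static str, value: Option<&str>) -> Self {
        let prev = env::var(key).ok();
        // `unsafe` block: set_var/remove_var are unsafe in edition 2024
        // (harmless warning on earlier editions).
        unsafe {
            match value {
                Some(v) => env::set_var(key, v),
                None => env::remove_var(key),
            }
        }
        Self { key, prev }
    }
}

impl Drop for EnvGuard {
    fn drop(&mut self) {
        unsafe {
            match &self.prev {
                Some(v) => env::set_var(self.key, v),
                None => env::remove_var(self.key),
            }
        }
    }
}

fn main() {
    let _lock = ENV_MUTEX.lock().unwrap_or_else(|e| e.into_inner());
    let guard = EnvGuard::set("DEMO_VAR", Some("low"));
    assert_eq!(env::var("DEMO_VAR").unwrap(), "low");
    drop(guard); // previous (unset) state restored
    assert!(env::var("DEMO_VAR").is_err());
}
```

The `unwrap_or_else(|e| e.into_inner())` on the lock recovers it even if another test panicked while holding it, so one failing test does not poison the rest of the suite.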
+1
View File
@@ -108,6 +108,7 @@ fn is_context_window_exceeded(err: &anyhow::Error) -> bool {
"token limit exceeded",
"prompt is too long",
"input is too long",
"prompt exceeds max length",
];
hints.iter().any(|hint| lower.contains(hint))
+17 -2
View File
@@ -97,7 +97,8 @@ pub struct SecurityPolicy {
/// Default allowed commands for Unix platforms.
#[cfg(not(target_os = "windows"))]
fn default_allowed_commands() -> Vec<String> {
-vec![
+#[allow(unused_mut)]
+let mut cmds = vec![
"git".into(),
"npm".into(),
"cargo".into(),
@@ -111,7 +112,16 @@ fn default_allowed_commands() -> Vec<String> {
"head".into(),
"tail".into(),
"date".into(),
-]
+"df".into(),
+"du".into(),
+"uname".into(),
+"uptime".into(),
+"hostname".into(),
+];
+// `free` is Linux-only; it does not exist on macOS or other BSDs.
+#[cfg(target_os = "linux")]
+cmds.push("free".into());
+cmds
}
/// Default allowed commands for Windows platforms.
@@ -142,6 +152,11 @@ fn default_allowed_commands() -> Vec<String> {
"wc".into(),
"head".into(),
"tail".into(),
"df".into(),
"du".into(),
"uname".into(),
"uptime".into(),
"hostname".into(),
]
}
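The cfg-push pattern this diff introduces — build a base list, then append platform-specific entries behind `#[cfg(...)]` — can be isolated into a small std-only sketch. The command names here are illustrative, not the crate's full allowlist:

```rust
/// Platform-aware default command allowlist (sketch of the cfg-push pattern).
fn default_allowed_commands() -> Vec<String> {
    // `unused_mut` is allowed because on non-Linux targets the cfg'd
    // push below compiles away and `cmds` is never mutated.
    #[allow(unused_mut)]
    let mut cmds: Vec<String> = ["git", "cargo", "df", "du", "uname"]
        .iter()
        .map(|s| (*s).to_string())
        .collect();
    // `free` reads /proc/meminfo and exists only on Linux.
    #[cfg(target_os = "linux")]
    cmds.push("free".to_string());
    cmds
}

fn main() {
    let cmds = default_allowed_commands();
    assert!(cmds.contains(&"git".to_string()));
    #[cfg(not(target_os = "linux"))]
    assert!(!cmds.contains(&"free".to_string()));
    println!("{} commands allowed", cmds.len());
}
```

The alternative — one `vec![]` per target behind `#[cfg]` — duplicates the shared entries; the push-after-base form keeps a single source of truth for the common set.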
+636
View File
@@ -0,0 +1,636 @@
//! Live Canvas (A2UI) tool — push rendered content to a web canvas in real time.
//!
//! The agent can render HTML/SVG/Markdown to a named canvas, snapshot its
//! current state, clear it, or evaluate a JavaScript expression in the canvas
//! context. Content is stored in a shared [`CanvasStore`] and broadcast to
//! connected WebSocket clients via per-canvas channels.
use super::traits::{Tool, ToolResult};
use async_trait::async_trait;
use parking_lot::RwLock;
use serde::{Deserialize, Serialize};
use serde_json::json;
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::broadcast;
/// Maximum content size per canvas frame (256 KB).
pub const MAX_CONTENT_SIZE: usize = 256 * 1024;
/// Maximum number of history frames kept per canvas.
const MAX_HISTORY_FRAMES: usize = 50;
/// Broadcast channel capacity per canvas.
const BROADCAST_CAPACITY: usize = 64;
/// Maximum number of concurrent canvases to prevent memory exhaustion.
const MAX_CANVAS_COUNT: usize = 100;
/// Allowed content types for canvas frames via the REST API.
pub const ALLOWED_CONTENT_TYPES: &[&str] = &["html", "svg", "markdown", "text"];
/// A single canvas frame (one render).
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CanvasFrame {
/// Unique frame identifier.
pub frame_id: String,
/// Content type: `html`, `svg`, `markdown`, or `text`.
pub content_type: String,
/// The rendered content.
pub content: String,
/// ISO-8601 timestamp of when the frame was created.
pub timestamp: String,
}
/// Per-canvas state: current content + history + broadcast sender.
struct CanvasEntry {
current: Option<CanvasFrame>,
history: Vec<CanvasFrame>,
tx: broadcast::Sender<CanvasFrame>,
}
/// Shared canvas store — holds all active canvases.
///
/// Thread-safe and cheaply cloneable (wraps `Arc`).
#[derive(Clone)]
pub struct CanvasStore {
inner: Arc<RwLock<HashMap<String, CanvasEntry>>>,
}
impl Default for CanvasStore {
fn default() -> Self {
Self::new()
}
}
impl CanvasStore {
pub fn new() -> Self {
Self {
inner: Arc::new(RwLock::new(HashMap::new())),
}
}
/// Push a new frame to a canvas. Creates the canvas if it does not exist.
/// Returns `None` if the maximum canvas count has been reached and this is a new canvas.
pub fn render(
&self,
canvas_id: &str,
content_type: &str,
content: &str,
) -> Option<CanvasFrame> {
let frame = CanvasFrame {
frame_id: uuid::Uuid::new_v4().to_string(),
content_type: content_type.to_string(),
content: content.to_string(),
timestamp: chrono::Utc::now().to_rfc3339(),
};
let mut store = self.inner.write();
// Enforce canvas count limit for new canvases.
if !store.contains_key(canvas_id) && store.len() >= MAX_CANVAS_COUNT {
return None;
}
let entry = store
.entry(canvas_id.to_string())
.or_insert_with(|| CanvasEntry {
current: None,
history: Vec::new(),
tx: broadcast::channel(BROADCAST_CAPACITY).0,
});
entry.current = Some(frame.clone());
entry.history.push(frame.clone());
if entry.history.len() > MAX_HISTORY_FRAMES {
let excess = entry.history.len() - MAX_HISTORY_FRAMES;
entry.history.drain(..excess);
}
// Best-effort broadcast — ignore errors (no receivers is fine).
let _ = entry.tx.send(frame.clone());
Some(frame)
}
/// Get the current (most recent) frame for a canvas.
pub fn snapshot(&self, canvas_id: &str) -> Option<CanvasFrame> {
let store = self.inner.read();
store.get(canvas_id).and_then(|entry| entry.current.clone())
}
/// Get the frame history for a canvas.
pub fn history(&self, canvas_id: &str) -> Vec<CanvasFrame> {
let store = self.inner.read();
store
.get(canvas_id)
.map(|entry| entry.history.clone())
.unwrap_or_default()
}
/// Clear a canvas (removes current content and history).
pub fn clear(&self, canvas_id: &str) -> bool {
let mut store = self.inner.write();
if let Some(entry) = store.get_mut(canvas_id) {
entry.current = None;
entry.history.clear();
// Send an empty frame to signal clear to subscribers.
let clear_frame = CanvasFrame {
frame_id: uuid::Uuid::new_v4().to_string(),
content_type: "clear".to_string(),
content: String::new(),
timestamp: chrono::Utc::now().to_rfc3339(),
};
let _ = entry.tx.send(clear_frame);
true
} else {
false
}
}
/// Subscribe to real-time updates for a canvas.
/// Creates the canvas entry if it does not exist (subject to canvas count limit).
/// Returns `None` if the canvas does not exist and the limit has been reached.
pub fn subscribe(&self, canvas_id: &str) -> Option<broadcast::Receiver<CanvasFrame>> {
let mut store = self.inner.write();
// Enforce canvas count limit for new entries.
if !store.contains_key(canvas_id) && store.len() >= MAX_CANVAS_COUNT {
return None;
}
let entry = store
.entry(canvas_id.to_string())
.or_insert_with(|| CanvasEntry {
current: None,
history: Vec::new(),
tx: broadcast::channel(BROADCAST_CAPACITY).0,
});
Some(entry.tx.subscribe())
}
/// List all canvas IDs that currently have content.
pub fn list(&self) -> Vec<String> {
let store = self.inner.read();
store.keys().cloned().collect()
}
}
/// `CanvasTool` — agent-callable tool for the Live Canvas (A2UI) system.
pub struct CanvasTool {
store: CanvasStore,
}
impl CanvasTool {
pub fn new(store: CanvasStore) -> Self {
Self { store }
}
}
#[async_trait]
impl Tool for CanvasTool {
fn name(&self) -> &str {
"canvas"
}
fn description(&self) -> &str {
"Push rendered content (HTML, SVG, Markdown) to a live web canvas that users can see \
in real-time. Actions: render (push content), snapshot (get current content), \
clear (reset canvas), eval (evaluate JS expression in canvas context). \
Each canvas is identified by a canvas_id string."
}
fn parameters_schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"action": {
"type": "string",
"description": "Action to perform on the canvas.",
"enum": ["render", "snapshot", "clear", "eval"]
},
"canvas_id": {
"type": "string",
"description": "Unique identifier for the canvas. Defaults to 'default'."
},
"content_type": {
"type": "string",
"description": "Content type for render action: html, svg, markdown, or text.",
"enum": ["html", "svg", "markdown", "text"]
},
"content": {
"type": "string",
"description": "Content to render (for render action)."
},
"expression": {
"type": "string",
"description": "JavaScript expression to evaluate (for eval action). \
The result is returned as text. Evaluated client-side in the canvas iframe."
}
},
"required": ["action"]
})
}
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
let action = match args.get("action").and_then(|v| v.as_str()) {
Some(a) => a,
None => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Missing required parameter: action".to_string()),
});
}
};
let canvas_id = args
.get("canvas_id")
.and_then(|v| v.as_str())
.unwrap_or("default");
match action {
"render" => {
let content_type = args
.get("content_type")
.and_then(|v| v.as_str())
.unwrap_or("html");
let content = match args.get("content").and_then(|v| v.as_str()) {
Some(c) => c,
None => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(
"Missing required parameter: content (for render action)"
.to_string(),
),
});
}
};
if content.len() > MAX_CONTENT_SIZE {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Content exceeds maximum size of {} bytes",
MAX_CONTENT_SIZE
)),
});
}
match self.store.render(canvas_id, content_type, content) {
Some(frame) => Ok(ToolResult {
success: true,
output: format!(
"Rendered {} content to canvas '{}' (frame: {})",
content_type, canvas_id, frame.frame_id
),
error: None,
}),
None => Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Maximum canvas count ({}) reached. Clear unused canvases first.",
MAX_CANVAS_COUNT
)),
}),
}
}
"snapshot" => match self.store.snapshot(canvas_id) {
Some(frame) => Ok(ToolResult {
success: true,
output: serde_json::to_string_pretty(&frame)
.unwrap_or_else(|_| frame.content.clone()),
error: None,
}),
None => Ok(ToolResult {
success: true,
output: format!("Canvas '{}' is empty", canvas_id),
error: None,
}),
},
"clear" => {
let existed = self.store.clear(canvas_id);
Ok(ToolResult {
success: true,
output: if existed {
format!("Canvas '{}' cleared", canvas_id)
} else {
format!("Canvas '{}' was already empty", canvas_id)
},
error: None,
})
}
"eval" => {
// Eval is handled client-side. We store an eval request as a special frame
// that the web viewer interprets.
let expression = match args.get("expression").and_then(|v| v.as_str()) {
Some(e) => e,
None => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(
"Missing required parameter: expression (for eval action)"
.to_string(),
),
});
}
};
// Push a special eval frame so connected clients know to evaluate it.
match self.store.render(canvas_id, "eval", expression) {
Some(frame) => Ok(ToolResult {
success: true,
output: format!(
"Eval request sent to canvas '{}' (frame: {}). \
Result will be available to connected viewers.",
canvas_id, frame.frame_id
),
error: None,
}),
None => Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Maximum canvas count ({}) reached. Clear unused canvases first.",
MAX_CANVAS_COUNT
)),
}),
}
}
other => Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Unknown action: '{}'. Valid actions: render, snapshot, clear, eval",
other
)),
}),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn canvas_store_render_and_snapshot() {
let store = CanvasStore::new();
let frame = store.render("test", "html", "<h1>Hello</h1>").unwrap();
assert_eq!(frame.content_type, "html");
assert_eq!(frame.content, "<h1>Hello</h1>");
let snapshot = store.snapshot("test").unwrap();
assert_eq!(snapshot.frame_id, frame.frame_id);
assert_eq!(snapshot.content, "<h1>Hello</h1>");
}
#[test]
fn canvas_store_snapshot_empty_returns_none() {
let store = CanvasStore::new();
assert!(store.snapshot("nonexistent").is_none());
}
#[test]
fn canvas_store_clear_removes_content() {
let store = CanvasStore::new();
store.render("test", "html", "<p>content</p>");
assert!(store.snapshot("test").is_some());
let cleared = store.clear("test");
assert!(cleared);
assert!(store.snapshot("test").is_none());
}
#[test]
fn canvas_store_clear_nonexistent_returns_false() {
let store = CanvasStore::new();
assert!(!store.clear("nonexistent"));
}
#[test]
fn canvas_store_history_tracks_frames() {
let store = CanvasStore::new();
store.render("test", "html", "frame1");
store.render("test", "html", "frame2");
store.render("test", "html", "frame3");
let history = store.history("test");
assert_eq!(history.len(), 3);
assert_eq!(history[0].content, "frame1");
assert_eq!(history[2].content, "frame3");
}
#[test]
fn canvas_store_history_limit_enforced() {
let store = CanvasStore::new();
for i in 0..60 {
store.render("test", "html", &format!("frame{i}"));
}
let history = store.history("test");
assert_eq!(history.len(), MAX_HISTORY_FRAMES);
// Oldest frames should have been dropped
assert_eq!(history[0].content, "frame10");
}
#[test]
fn canvas_store_list_returns_canvas_ids() {
let store = CanvasStore::new();
store.render("alpha", "html", "a");
store.render("beta", "svg", "b");
let mut ids = store.list();
ids.sort();
assert_eq!(ids, vec!["alpha", "beta"]);
}
#[test]
fn canvas_store_subscribe_receives_updates() {
let store = CanvasStore::new();
let mut rx = store.subscribe("test").unwrap();
store.render("test", "html", "<p>live</p>");
let frame = rx.try_recv().unwrap();
assert_eq!(frame.content, "<p>live</p>");
}
#[tokio::test]
async fn canvas_tool_render_action() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store.clone());
let result = tool
.execute(json!({
"action": "render",
"canvas_id": "test",
"content_type": "html",
"content": "<h1>Hello World</h1>"
}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("Rendered html content"));
let snapshot = store.snapshot("test").unwrap();
assert_eq!(snapshot.content, "<h1>Hello World</h1>");
}
#[tokio::test]
async fn canvas_tool_snapshot_action() {
let store = CanvasStore::new();
store.render("test", "html", "<p>snap</p>");
let tool = CanvasTool::new(store);
let result = tool
.execute(json!({"action": "snapshot", "canvas_id": "test"}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("<p>snap</p>"));
}
#[tokio::test]
async fn canvas_tool_snapshot_empty() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store);
let result = tool
.execute(json!({"action": "snapshot", "canvas_id": "empty"}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("empty"));
}
#[tokio::test]
async fn canvas_tool_clear_action() {
let store = CanvasStore::new();
store.render("test", "html", "<p>clear me</p>");
let tool = CanvasTool::new(store.clone());
let result = tool
.execute(json!({"action": "clear", "canvas_id": "test"}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("cleared"));
assert!(store.snapshot("test").is_none());
}
#[tokio::test]
async fn canvas_tool_eval_action() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store.clone());
let result = tool
.execute(json!({
"action": "eval",
"canvas_id": "test",
"expression": "document.title"
}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("Eval request sent"));
let snapshot = store.snapshot("test").unwrap();
assert_eq!(snapshot.content_type, "eval");
assert_eq!(snapshot.content, "document.title");
}
#[tokio::test]
async fn canvas_tool_unknown_action() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store);
let result = tool.execute(json!({"action": "invalid"})).await.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("Unknown action"));
}
#[tokio::test]
async fn canvas_tool_missing_action() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store);
let result = tool.execute(json!({})).await.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("action"));
}
#[tokio::test]
async fn canvas_tool_render_missing_content() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store);
let result = tool
.execute(json!({"action": "render", "canvas_id": "test"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("content"));
}
#[tokio::test]
async fn canvas_tool_render_content_too_large() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store);
let big_content = "x".repeat(MAX_CONTENT_SIZE + 1);
let result = tool
.execute(json!({
"action": "render",
"canvas_id": "test",
"content": big_content
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("maximum size"));
}
#[tokio::test]
async fn canvas_tool_default_canvas_id() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store.clone());
let result = tool
.execute(json!({
"action": "render",
"content_type": "html",
"content": "<p>default</p>"
}))
.await
.unwrap();
assert!(result.success);
assert!(store.snapshot("default").is_some());
}
#[test]
fn canvas_store_enforces_max_canvas_count() {
let store = CanvasStore::new();
// Create MAX_CANVAS_COUNT canvases
for i in 0..MAX_CANVAS_COUNT {
assert!(store
.render(&format!("canvas_{i}"), "html", "content")
.is_some());
}
// The next new canvas should be rejected
assert!(store.render("one_too_many", "html", "content").is_none());
// But rendering to an existing canvas should still work
assert!(store.render("canvas_0", "html", "updated").is_some());
}
#[tokio::test]
async fn canvas_tool_eval_missing_expression() {
let store = CanvasStore::new();
let tool = CanvasTool::new(store);
let result = tool
.execute(json!({"action": "eval", "canvas_id": "test"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("expression"));
}
}
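The history cap enforced in `CanvasStore::render` — push the new frame, then `drain` any excess past `MAX_HISTORY_FRAMES` — is easiest to see in isolation. A minimal std-only sketch, with the cap shrunk from 50 to 5 for illustration and `String` standing in for `CanvasFrame`:

```rust
const MAX_HISTORY_FRAMES: usize = 5; // shrunk from 50 for the demo

/// Push into a bounded history, dropping the oldest entries first.
fn push_bounded(history: &mut Vec<String>, frame: String) {
    history.push(frame);
    if history.len() > MAX_HISTORY_FRAMES {
        let excess = history.len() - MAX_HISTORY_FRAMES;
        history.drain(..excess); // shifts remaining elements left
    }
}

fn main() {
    let mut history = Vec::new();
    for i in 0..8 {
        push_bounded(&mut history, format!("frame{i}"));
    }
    assert_eq!(history.len(), MAX_HISTORY_FRAMES);
    assert_eq!(history[0], "frame3"); // frames 0-2 were dropped
    assert_eq!(history[4], "frame7");
}
```

A `VecDeque` with `pop_front` would make each trim O(1) instead of the O(n) shift `Vec::drain` performs, but at a cap of 50 frames the difference is negligible, and `Vec` keeps `history()`'s clone-and-return simple.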
+204
View File
@@ -0,0 +1,204 @@
use super::traits::{Tool, ToolResult};
use crate::memory::Memory;
use async_trait::async_trait;
use serde_json::json;
use std::fmt::Write;
use std::sync::Arc;
/// Search Discord message history stored in discord.db.
pub struct DiscordSearchTool {
discord_memory: Arc<dyn Memory>,
}
impl DiscordSearchTool {
pub fn new(discord_memory: Arc<dyn Memory>) -> Self {
Self { discord_memory }
}
}
#[async_trait]
impl Tool for DiscordSearchTool {
fn name(&self) -> &str {
"discord_search"
}
fn description(&self) -> &str {
"Search Discord message history. Returns messages matching a keyword query, optionally filtered by channel_id, author_id, or time range."
}
fn parameters_schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Keywords or phrase to search for in Discord messages (optional if since/until provided)"
},
"limit": {
"type": "integer",
"description": "Max results to return (default: 10)"
},
"channel_id": {
"type": "string",
"description": "Filter results to a specific Discord channel ID"
},
"since": {
"type": "string",
"description": "Filter messages at or after this time (RFC 3339, e.g. 2025-03-01T00:00:00Z)"
},
"until": {
"type": "string",
"description": "Filter messages at or before this time (RFC 3339)"
}
}
})
}
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
let query = args.get("query").and_then(|v| v.as_str()).unwrap_or("");
let channel_id = args.get("channel_id").and_then(|v| v.as_str());
let since = args.get("since").and_then(|v| v.as_str());
let until = args.get("until").and_then(|v| v.as_str());
if query.trim().is_empty() && since.is_none() && until.is_none() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(
"Provide at least 'query' (keywords) or time range ('since'/'until')".into(),
),
});
}
if let Some(s) = since {
if chrono::DateTime::parse_from_rfc3339(s).is_err() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Invalid 'since' date: {s}. Expected RFC 3339, e.g. 2025-03-01T00:00:00Z"
)),
});
}
}
if let Some(u) = until {
if chrono::DateTime::parse_from_rfc3339(u).is_err() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Invalid 'until' date: {u}. Expected RFC 3339, e.g. 2025-03-01T00:00:00Z"
)),
});
}
}
if let (Some(s), Some(u)) = (since, until) {
if let (Ok(s_dt), Ok(u_dt)) = (
chrono::DateTime::parse_from_rfc3339(s),
chrono::DateTime::parse_from_rfc3339(u),
) {
if s_dt >= u_dt {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("'since' must be before 'until'".into()),
});
}
}
}
#[allow(clippy::cast_possible_truncation)]
let limit = args
.get("limit")
.and_then(serde_json::Value::as_u64)
.map_or(10, |v| v as usize);
match self
.discord_memory
.recall(query, limit, channel_id, since, until)
.await
{
Ok(entries) if entries.is_empty() => Ok(ToolResult {
success: true,
output: "No Discord messages found.".into(),
error: None,
}),
Ok(entries) => {
let mut output = format!("Found {} Discord messages:\n", entries.len());
for entry in &entries {
let score = entry
.score
.map_or_else(String::new, |s| format!(" [{s:.0}%]"));
let _ = writeln!(output, "- {}{score}", entry.content);
}
Ok(ToolResult {
success: true,
output,
error: None,
})
}
Err(e) => Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!("Discord search failed: {e}")),
}),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::memory::{MemoryCategory, SqliteMemory};
use tempfile::TempDir;
fn seeded_discord_mem() -> (TempDir, Arc<dyn Memory>) {
let tmp = TempDir::new().unwrap();
let mem = SqliteMemory::new_named(tmp.path(), "discord").unwrap();
(tmp, Arc::new(mem))
}
#[tokio::test]
async fn search_empty() {
let (_tmp, mem) = seeded_discord_mem();
let tool = DiscordSearchTool::new(mem);
let result = tool.execute(json!({"query": "hello"})).await.unwrap();
assert!(result.success);
assert!(result.output.contains("No Discord messages found"));
}
#[tokio::test]
async fn search_finds_match() {
let (_tmp, mem) = seeded_discord_mem();
mem.store(
"discord_001",
"@user1 in #general at 2025-01-01T00:00:00Z: hello world",
MemoryCategory::Custom("discord".to_string()),
Some("general"),
)
.await
.unwrap();
let tool = DiscordSearchTool::new(mem);
let result = tool.execute(json!({"query": "hello"})).await.unwrap();
assert!(result.success);
assert!(result.output.contains("hello"));
}
#[tokio::test]
async fn search_requires_query_or_time() {
let (_tmp, mem) = seeded_discord_mem();
let tool = DiscordSearchTool::new(mem);
let result = tool.execute(json!({})).await.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("at least"));
}
#[test]
fn name_and_schema() {
let (_tmp, mem) = seeded_discord_mem();
let tool = DiscordSearchTool::new(mem);
assert_eq!(tool.name(), "discord_search");
assert!(tool.parameters_schema()["properties"]["query"].is_object());
}
}
+494
View File
@@ -0,0 +1,494 @@
use super::traits::{Tool, ToolResult};
use crate::security::policy::ToolOperation;
use crate::security::SecurityPolicy;
use anyhow::Context;
use async_trait::async_trait;
use serde_json::json;
use std::path::PathBuf;
use std::sync::Arc;
/// Standalone image generation tool using fal.ai (Flux / Nano Banana models).
///
/// Reads the API key from an environment variable (default: `FAL_API_KEY`),
/// calls the fal.ai synchronous endpoint, downloads the resulting image,
/// and saves it to `{workspace}/images/{filename}.png`.
pub struct ImageGenTool {
security: Arc<SecurityPolicy>,
workspace_dir: PathBuf,
default_model: String,
api_key_env: String,
}
impl ImageGenTool {
pub fn new(
security: Arc<SecurityPolicy>,
workspace_dir: PathBuf,
default_model: String,
api_key_env: String,
) -> Self {
Self {
security,
workspace_dir,
default_model,
api_key_env,
}
}
/// Build a reusable HTTP client with reasonable timeouts.
fn http_client() -> reqwest::Client {
reqwest::Client::builder()
.timeout(std::time::Duration::from_secs(120))
.build()
.unwrap_or_default()
}
/// Read an API key from the environment.
fn read_api_key(env_var: &str) -> Result<String, String> {
std::env::var(env_var)
.map(|v| v.trim().to_string())
.ok()
.filter(|v| !v.is_empty())
.ok_or_else(|| format!("Missing API key: set the {env_var} environment variable"))
}
/// Core generation logic: call fal.ai, download image, save to disk.
async fn generate(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
// ── Parse parameters ───────────────────────────────────────
let prompt = match args.get("prompt").and_then(|v| v.as_str()) {
Some(p) if !p.trim().is_empty() => p.trim().to_string(),
_ => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Missing required parameter: 'prompt'".into()),
});
}
};
let filename = args
.get("filename")
.and_then(|v| v.as_str())
.filter(|s| !s.trim().is_empty())
.unwrap_or("generated_image");
// Sanitize filename — strip path components to prevent traversal.
let safe_name = PathBuf::from(filename).file_name().map_or_else(
|| "generated_image".to_string(),
|n| n.to_string_lossy().to_string(),
);
let size = args
.get("size")
.and_then(|v| v.as_str())
.unwrap_or("square_hd");
// Validate size enum.
const VALID_SIZES: &[&str] = &[
"square_hd",
"landscape_4_3",
"portrait_4_3",
"landscape_16_9",
"portrait_16_9",
];
if !VALID_SIZES.contains(&size) {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Invalid size '{size}'. Valid values: {}",
VALID_SIZES.join(", ")
)),
});
}
let model = args
.get("model")
.and_then(|v| v.as_str())
.filter(|s| !s.trim().is_empty())
.unwrap_or(&self.default_model);
// Validate model identifier: must look like a fal.ai model path
// (e.g. "fal-ai/flux/schnell"). Reject values with "..", query
// strings, or fragments that could redirect the HTTP request.
if model.contains("..")
|| model.contains('?')
|| model.contains('#')
|| model.contains('\\')
|| model.starts_with('/')
{
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Invalid model identifier '{model}'. \
Must be a fal.ai model path (e.g. 'fal-ai/flux/schnell')."
)),
});
}
// ── Read API key ───────────────────────────────────────────
let api_key = match Self::read_api_key(&self.api_key_env) {
Ok(k) => k,
Err(msg) => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(msg),
});
}
};
// ── Call fal.ai ────────────────────────────────────────────
let client = Self::http_client();
let url = format!("https://fal.run/{model}");
let body = json!({
"prompt": prompt,
"image_size": size,
"num_images": 1
});
let resp = client
.post(&url)
.header("Authorization", format!("Key {api_key}"))
.header("Content-Type", "application/json")
.json(&body)
.send()
.await
.context("fal.ai request failed")?;
let status = resp.status();
if !status.is_success() {
let body_text = resp.text().await.unwrap_or_default();
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!("fal.ai API error ({status}): {body_text}")),
});
}
let resp_json: serde_json::Value = resp
.json()
.await
.context("Failed to parse fal.ai response as JSON")?;
let image_url = resp_json
.pointer("/images/0/url")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("No image URL in fal.ai response"))?;
// ── Download image ─────────────────────────────────────────
let img_resp = client
.get(image_url)
.send()
.await
.context("Failed to download generated image")?;
if !img_resp.status().is_success() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Failed to download image from {image_url} ({})",
img_resp.status()
)),
});
}
let bytes = img_resp
.bytes()
.await
.context("Failed to read image bytes")?;
// ── Save to disk ───────────────────────────────────────────
let images_dir = self.workspace_dir.join("images");
tokio::fs::create_dir_all(&images_dir)
.await
.context("Failed to create images directory")?;
let output_path = images_dir.join(format!("{safe_name}.png"));
tokio::fs::write(&output_path, &bytes)
.await
.context("Failed to write image file")?;
let size_kb = bytes.len() / 1024;
Ok(ToolResult {
success: true,
output: format!(
"Image generated successfully.\n\
File: {}\n\
Size: {} KB\n\
Model: {}\n\
Prompt: {}",
output_path.display(),
size_kb,
model,
prompt,
),
error: None,
})
}
}
#[async_trait]
impl Tool for ImageGenTool {
fn name(&self) -> &str {
"image_gen"
}
fn description(&self) -> &str {
"Generate an image from a text prompt using fal.ai (Flux models). \
Saves the result to the workspace images directory and returns the file path."
}
fn parameters_schema(&self) -> serde_json::Value {
json!({
"type": "object",
"required": ["prompt"],
"properties": {
"prompt": {
"type": "string",
"description": "Text prompt describing the image to generate."
},
"filename": {
"type": "string",
"description": "Output filename without extension (default: 'generated_image'). Saved as PNG in workspace/images/."
},
"size": {
"type": "string",
"enum": ["square_hd", "landscape_4_3", "portrait_4_3", "landscape_16_9", "portrait_16_9"],
"description": "Image aspect ratio / size preset (default: 'square_hd')."
},
"model": {
"type": "string",
"description": "fal.ai model identifier (default: 'fal-ai/flux/schnell')."
}
}
})
}
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
// Security: image generation is a side-effecting action (HTTP + file write).
if let Err(error) = self
.security
.enforce_tool_operation(ToolOperation::Act, "image_gen")
{
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(error),
});
}
self.generate(args).await
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::security::{AutonomyLevel, SecurityPolicy};
fn test_security() -> Arc<SecurityPolicy> {
Arc::new(SecurityPolicy {
autonomy: AutonomyLevel::Full,
workspace_dir: std::env::temp_dir(),
..SecurityPolicy::default()
})
}
fn test_tool() -> ImageGenTool {
ImageGenTool::new(
test_security(),
std::env::temp_dir(),
"fal-ai/flux/schnell".into(),
"FAL_API_KEY".into(),
)
}
#[test]
fn tool_name() {
let tool = test_tool();
assert_eq!(tool.name(), "image_gen");
}
#[test]
fn tool_description_is_nonempty() {
let tool = test_tool();
assert!(!tool.description().is_empty());
assert!(tool.description().contains("image"));
}
#[test]
fn tool_schema_has_required_prompt() {
let tool = test_tool();
let schema = tool.parameters_schema();
assert_eq!(schema["required"], json!(["prompt"]));
assert!(schema["properties"]["prompt"].is_object());
}
#[test]
fn tool_schema_has_optional_params() {
let tool = test_tool();
let schema = tool.parameters_schema();
assert!(schema["properties"]["filename"].is_object());
assert!(schema["properties"]["size"].is_object());
assert!(schema["properties"]["model"].is_object());
}
#[test]
fn tool_spec_roundtrip() {
let tool = test_tool();
let spec = tool.spec();
assert_eq!(spec.name, "image_gen");
assert!(spec.parameters.is_object());
}
#[tokio::test]
async fn missing_prompt_returns_error() {
let tool = test_tool();
let result = tool.execute(json!({})).await.unwrap();
assert!(!result.success);
assert!(result.error.as_deref().unwrap().contains("prompt"));
}
#[tokio::test]
async fn empty_prompt_returns_error() {
let tool = test_tool();
let result = tool.execute(json!({"prompt": " "})).await.unwrap();
assert!(!result.success);
assert!(result.error.as_deref().unwrap().contains("prompt"));
}
#[tokio::test]
async fn missing_api_key_returns_error() {
// Temporarily ensure the env var is unset.
let original = std::env::var("FAL_API_KEY_TEST_IMAGE_GEN").ok();
std::env::remove_var("FAL_API_KEY_TEST_IMAGE_GEN");
let tool = ImageGenTool::new(
test_security(),
std::env::temp_dir(),
"fal-ai/flux/schnell".into(),
"FAL_API_KEY_TEST_IMAGE_GEN".into(),
);
let result = tool
.execute(json!({"prompt": "a sunset over the ocean"}))
.await
.unwrap();
assert!(!result.success);
assert!(result
.error
.as_deref()
.unwrap()
.contains("FAL_API_KEY_TEST_IMAGE_GEN"));
// Restore if it was set.
if let Some(val) = original {
std::env::set_var("FAL_API_KEY_TEST_IMAGE_GEN", val);
}
}
#[tokio::test]
async fn invalid_size_returns_error() {
// Set a dummy key so we get past the key check.
std::env::set_var("FAL_API_KEY_TEST_SIZE", "dummy_key");
let tool = ImageGenTool::new(
test_security(),
std::env::temp_dir(),
"fal-ai/flux/schnell".into(),
"FAL_API_KEY_TEST_SIZE".into(),
);
let result = tool
.execute(json!({"prompt": "test", "size": "invalid_size"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_deref().unwrap().contains("Invalid size"));
std::env::remove_var("FAL_API_KEY_TEST_SIZE");
}
#[tokio::test]
async fn read_only_autonomy_blocks_execution() {
let security = Arc::new(SecurityPolicy {
autonomy: AutonomyLevel::ReadOnly,
workspace_dir: std::env::temp_dir(),
..SecurityPolicy::default()
});
let tool = ImageGenTool::new(
security,
std::env::temp_dir(),
"fal-ai/flux/schnell".into(),
"FAL_API_KEY".into(),
);
let result = tool.execute(json!({"prompt": "test image"})).await.unwrap();
assert!(!result.success);
let err = result.error.as_deref().unwrap();
assert!(
err.contains("read-only") || err.contains("image_gen"),
"expected read-only or image_gen in error, got: {err}"
);
}
#[tokio::test]
async fn invalid_model_with_traversal_returns_error() {
std::env::set_var("FAL_API_KEY_TEST_MODEL", "dummy_key");
let tool = ImageGenTool::new(
test_security(),
std::env::temp_dir(),
"fal-ai/flux/schnell".into(),
"FAL_API_KEY_TEST_MODEL".into(),
);
let result = tool
.execute(json!({"prompt": "test", "model": "../../evil-endpoint"}))
.await
.unwrap();
assert!(!result.success);
assert!(result
.error
.as_deref()
.unwrap()
.contains("Invalid model identifier"));
std::env::remove_var("FAL_API_KEY_TEST_MODEL");
}
#[test]
fn read_api_key_missing() {
let result = ImageGenTool::read_api_key("DEFINITELY_NOT_SET_ZC_TEST_12345");
assert!(result.is_err());
assert!(result
.unwrap_err()
.contains("DEFINITELY_NOT_SET_ZC_TEST_12345"));
}
#[test]
fn filename_traversal_is_sanitized() {
// Verify that path traversal in filenames is stripped to just the final component.
let sanitized = PathBuf::from("../../etc/passwd").file_name().map_or_else(
|| "generated_image".to_string(),
|n| n.to_string_lossy().to_string(),
);
assert_eq!(sanitized, "passwd");
// ".." alone has no file_name, falls back to default.
let sanitized = PathBuf::from("..").file_name().map_or_else(
|| "generated_image".to_string(),
|n| n.to_string_lossy().to_string(),
);
assert_eq!(sanitized, "generated_image");
}
#[test]
fn read_api_key_present() {
std::env::set_var("ZC_IMAGE_GEN_TEST_KEY", "test_value_123");
let result = ImageGenTool::read_api_key("ZC_IMAGE_GEN_TEST_KEY");
assert!(result.is_ok());
assert_eq!(result.unwrap(), "test_value_123");
std::env::remove_var("ZC_IMAGE_GEN_TEST_KEY");
}
}
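The tests above (`missing_api_key_returns_error`, `invalid_size_returns_error`, `invalid_model_with_traversal_returns_error`, `read_api_key_present`) all call `std::env::set_var` / `remove_var`, so they can race when the test harness runs them on parallel threads, the same class of flake the "serialize env-dependent codex tests" commit fixes elsewhere in this PR set. A minimal std-only sketch of that serialization pattern (hypothetical `env_guard` helper and `ZC_ENV_GUARD_DEMO` variable, illustration only):

```rust
use std::sync::{Mutex, MutexGuard, OnceLock, PoisonError};

// Process-wide lock that every env-var-mutating test acquires first.
// This serializes set_var/remove_var calls across parallel test threads.
fn env_guard() -> MutexGuard<'static, ()> {
    static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
    // A poisoned lock only means an earlier test panicked while holding it;
    // the guard itself is still safe to reuse.
    LOCK.get_or_init(|| Mutex::new(()))
        .lock()
        .unwrap_or_else(PoisonError::into_inner)
}

fn main() {
    let _guard = env_guard(); // held for the whole test body
    std::env::set_var("ZC_ENV_GUARD_DEMO", "1");
    assert_eq!(std::env::var("ZC_ENV_GUARD_DEMO").as_deref(), Ok("1"));
    std::env::remove_var("ZC_ENV_GUARD_DEMO");
    assert!(std::env::var("ZC_ENV_GUARD_DEMO").is_err());
}
```

Each test would take `let _guard = env_guard();` as its first statement; the guard drops at end of scope, releasing the lock for the next env-mutating test.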
+85 -10
@@ -20,6 +20,7 @@ pub mod browser;
pub mod browser_delegate;
pub mod browser_open;
pub mod calculator;
pub mod canvas;
pub mod claude_code;
pub mod cli_discovery;
pub mod cloud_ops;
@@ -34,6 +35,7 @@ pub mod cron_runs;
pub mod cron_update;
pub mod data_management;
pub mod delegate;
pub mod discord_search;
pub mod file_edit;
pub mod file_read;
pub mod file_write;
@@ -47,6 +49,7 @@ pub mod hardware_memory_map;
#[cfg(feature = "hardware")]
pub mod hardware_memory_read;
pub mod http_request;
pub mod image_gen;
pub mod image_info;
pub mod jira_tool;
pub mod knowledge_tool;
@@ -69,12 +72,14 @@ pub mod pdf_read;
pub mod project_intel;
pub mod proxy_config;
pub mod pushover;
pub mod reaction;
pub mod read_skill;
pub mod report_templates;
pub mod schedule;
pub mod schema;
pub mod screenshot;
pub mod security_ops;
pub mod sessions;
pub mod shell;
pub mod swarm;
pub mod text_browser;
@@ -93,6 +98,7 @@ pub use browser::{BrowserTool, ComputerUseConfig};
pub use browser_delegate::{BrowserDelegateConfig, BrowserDelegateTool};
pub use browser_open::BrowserOpenTool;
pub use calculator::CalculatorTool;
pub use canvas::{CanvasStore, CanvasTool};
pub use claude_code::ClaudeCodeTool;
pub use cloud_ops::CloudOpsTool;
pub use cloud_patterns::CloudPatternsTool;
@@ -106,6 +112,7 @@ pub use cron_runs::CronRunsTool;
pub use cron_update::CronUpdateTool;
pub use data_management::DataManagementTool;
pub use delegate::DelegateTool;
pub use discord_search::DiscordSearchTool;
pub use file_edit::FileEditTool;
pub use file_read::FileReadTool;
pub use file_write::FileWriteTool;
@@ -119,6 +126,7 @@ pub use hardware_memory_map::HardwareMemoryMapTool;
#[cfg(feature = "hardware")]
pub use hardware_memory_read::HardwareMemoryReadTool;
pub use http_request::HttpRequestTool;
pub use image_gen::ImageGenTool;
pub use image_info::ImageInfoTool;
pub use jira_tool::JiraTool;
pub use knowledge_tool::KnowledgeTool;
@@ -139,12 +147,14 @@ pub use pdf_read::PdfReadTool;
pub use project_intel::ProjectIntelTool;
pub use proxy_config::ProxyConfigTool;
pub use pushover::PushoverTool;
pub use reaction::{ChannelMapHandle, ReactionTool};
pub use read_skill::ReadSkillTool;
pub use schedule::ScheduleTool;
#[allow(unused_imports)]
pub use schema::{CleaningStrategy, SchemaCleanr};
pub use screenshot::ScreenshotTool;
pub use security_ops::SecurityOpsTool;
pub use sessions::{SessionsHistoryTool, SessionsListTool, SessionsSendTool};
pub use shell::ShellTool;
pub use swarm::SwarmTool;
pub use text_browser::TextBrowserTool;
@@ -262,7 +272,12 @@ pub fn all_tools(
agents: &HashMap<String, DelegateAgentConfig>,
fallback_api_key: Option<&str>,
root_config: &crate::config::Config,
) -> (Vec<Box<dyn Tool>>, Option<DelegateParentToolsHandle>) {
canvas_store: Option<CanvasStore>,
) -> (
Vec<Box<dyn Tool>>,
Option<DelegateParentToolsHandle>,
Option<ChannelMapHandle>,
) {
all_tools_with_runtime(
config,
security,
@@ -277,6 +292,7 @@ pub fn all_tools(
agents,
fallback_api_key,
root_config,
canvas_store,
)
}
@@ -296,7 +312,12 @@ pub fn all_tools_with_runtime(
agents: &HashMap<String, DelegateAgentConfig>,
fallback_api_key: Option<&str>,
root_config: &crate::config::Config,
) -> (Vec<Box<dyn Tool>>, Option<DelegateParentToolsHandle>) {
canvas_store: Option<CanvasStore>,
) -> (
Vec<Box<dyn Tool>>,
Option<DelegateParentToolsHandle>,
Option<ChannelMapHandle>,
) {
let has_shell_access = runtime.has_shell_access();
let sandbox = create_sandbox(&root_config.security);
let mut tool_arcs: Vec<Arc<dyn Tool>> = vec![
@@ -336,8 +357,21 @@ pub fn all_tools_with_runtime(
)),
Arc::new(CalculatorTool::new()),
Arc::new(WeatherTool::new()),
Arc::new(CanvasTool::new(canvas_store.unwrap_or_default())),
];
// Register discord_search if discord_history channel is configured
if root_config.channels_config.discord_history.is_some() {
match crate::memory::SqliteMemory::new_named(workspace_dir, "discord") {
Ok(discord_mem) => {
tool_arcs.push(Arc::new(DiscordSearchTool::new(Arc::new(discord_mem))));
}
Err(e) => {
tracing::warn!("discord_search: failed to open discord.db: {e}");
}
}
}
if matches!(
root_config.skills.prompt_injection_mode,
crate::config::SkillsPromptInjectionMode::Compact
@@ -545,6 +579,18 @@ pub fn all_tools_with_runtime(
tool_arcs.push(Arc::new(ScreenshotTool::new(security.clone())));
tool_arcs.push(Arc::new(ImageInfoTool::new(security.clone())));
// Session-to-session messaging tools (always available when sessions dir exists)
if let Ok(session_store) = crate::channels::session_store::SessionStore::new(workspace_dir) {
let backend: Arc<dyn crate::channels::session_backend::SessionBackend> =
Arc::new(session_store);
tool_arcs.push(Arc::new(SessionsListTool::new(backend.clone())));
tool_arcs.push(Arc::new(SessionsHistoryTool::new(
backend.clone(),
security.clone(),
)));
tool_arcs.push(Arc::new(SessionsSendTool::new(backend, security.clone())));
}
// LinkedIn integration (config-gated)
if root_config.linkedin.enabled {
tool_arcs.push(Arc::new(LinkedInTool::new(
@@ -556,6 +602,16 @@ pub fn all_tools_with_runtime(
)));
}
// Standalone image generation tool (config-gated)
if root_config.image_gen.enabled {
tool_arcs.push(Arc::new(ImageGenTool::new(
security.clone(),
workspace_dir.to_path_buf(),
root_config.image_gen.default_model.clone(),
root_config.image_gen.api_key_env.clone(),
)));
}
if let Some(key) = composio_key {
if !key.is_empty() {
tool_arcs.push(Arc::new(ComposioTool::new(
@@ -566,6 +622,11 @@ pub fn all_tools_with_runtime(
}
}
// Emoji reaction tool — always registered; channel map populated later by start_channels.
let reaction_tool = ReactionTool::new(security.clone());
let reaction_handle = reaction_tool.channel_map_handle();
tool_arcs.push(Arc::new(reaction_tool));
// Microsoft 365 Graph API integration
if root_config.microsoft365.enabled {
let ms_cfg = &root_config.microsoft365;
@@ -592,7 +653,11 @@ pub fn all_tools_with_runtime(
tracing::error!(
"microsoft365: client_credentials auth_flow requires a non-empty client_secret"
);
return (boxed_registry_from_arcs(tool_arcs), None);
return (
boxed_registry_from_arcs(tool_arcs),
None,
Some(reaction_handle),
);
}
let resolved = microsoft365::types::Microsoft365ResolvedConfig {
@@ -776,7 +841,11 @@ pub fn all_tools_with_runtime(
}
}
(boxed_registry_from_arcs(tool_arcs), delegate_handle)
(
boxed_registry_from_arcs(tool_arcs),
delegate_handle,
Some(reaction_handle),
)
}
#[cfg(test)]
@@ -820,7 +889,7 @@ mod tests {
let http = crate::config::HttpRequestConfig::default();
let cfg = test_config(&tmp);
let (tools, _) = all_tools(
let (tools, _, _) = all_tools(
Arc::new(Config::default()),
&security,
mem,
@@ -833,6 +902,7 @@ mod tests {
&HashMap::new(),
None,
&cfg,
None,
);
let names: Vec<&str> = tools.iter().map(|t| t.name()).collect();
assert!(!names.contains(&"browser_open"));
@@ -862,7 +932,7 @@ mod tests {
let http = crate::config::HttpRequestConfig::default();
let cfg = test_config(&tmp);
let (tools, _) = all_tools(
let (tools, _, _) = all_tools(
Arc::new(Config::default()),
&security,
mem,
@@ -875,6 +945,7 @@ mod tests {
&HashMap::new(),
None,
&cfg,
None,
);
let names: Vec<&str> = tools.iter().map(|t| t.name()).collect();
assert!(names.contains(&"browser_open"));
@@ -1015,7 +1086,7 @@ mod tests {
},
);
let (tools, _) = all_tools(
let (tools, _, _) = all_tools(
Arc::new(Config::default()),
&security,
mem,
@@ -1028,6 +1099,7 @@ mod tests {
&agents,
Some("delegate-test-credential"),
&cfg,
None,
);
let names: Vec<&str> = tools.iter().map(|t| t.name()).collect();
assert!(names.contains(&"delegate"));
@@ -1048,7 +1120,7 @@ mod tests {
let http = crate::config::HttpRequestConfig::default();
let cfg = test_config(&tmp);
let (tools, _) = all_tools(
let (tools, _, _) = all_tools(
Arc::new(Config::default()),
&security,
mem,
@@ -1061,6 +1133,7 @@ mod tests {
&HashMap::new(),
None,
&cfg,
None,
);
let names: Vec<&str> = tools.iter().map(|t| t.name()).collect();
assert!(!names.contains(&"delegate"));
@@ -1082,7 +1155,7 @@ mod tests {
let mut cfg = test_config(&tmp);
cfg.skills.prompt_injection_mode = crate::config::SkillsPromptInjectionMode::Compact;
let (tools, _) = all_tools(
let (tools, _, _) = all_tools(
Arc::new(cfg.clone()),
&security,
mem,
@@ -1095,6 +1168,7 @@ mod tests {
&HashMap::new(),
None,
&cfg,
None,
);
let names: Vec<&str> = tools.iter().map(|t| t.name()).collect();
assert!(names.contains(&"read_skill"));
@@ -1116,7 +1190,7 @@ mod tests {
let mut cfg = test_config(&tmp);
cfg.skills.prompt_injection_mode = crate::config::SkillsPromptInjectionMode::Full;
let (tools, _) = all_tools(
let (tools, _, _) = all_tools(
Arc::new(cfg.clone()),
&security,
mem,
@@ -1129,6 +1203,7 @@ mod tests {
&HashMap::new(),
None,
&cfg,
None,
);
let names: Vec<&str> = tools.iter().map(|t| t.name()).collect();
assert!(!names.contains(&"read_skill"));
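The diff above changes `all_tools` to return a three-tuple whose third element is a `ChannelMapHandle` that starts empty and is populated by the caller once channels boot. A std-only sketch of that caller-side flow (hypothetical `fake_registry` stand-in for `all_tools`, and `std::sync::RwLock` in place of `parking_lot::RwLock`, so the example is self-contained):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Simplified stand-in for the real handle, which maps names to Arc<dyn Channel>.
type ChannelMapHandle = Arc<RwLock<HashMap<String, String>>>;

// Stand-in for all_tools(): returns the registry plus a late-binding handle.
fn fake_registry() -> (Vec<String>, ChannelMapHandle) {
    (
        vec!["reaction".to_string()],
        Arc::new(RwLock::new(HashMap::new())),
    )
}

fn main() {
    let (tools, handle) = fake_registry();
    assert!(tools.contains(&"reaction".to_string()));
    // Before channel startup the map is empty; the reaction tool would
    // report "channels not initialized" at this point.
    assert!(handle.read().unwrap().is_empty());
    // start_channels later writes through the same handle.
    handle
        .write()
        .unwrap()
        .insert("discord".to_string(), "channel-instance".to_string());
    assert_eq!(handle.read().unwrap().len(), 1);
}
```

The registry never needs to know when (or whether) channels come up; the shared `Arc<RwLock<…>>` is the only coupling point.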
+546
@@ -0,0 +1,546 @@
//! Emoji reaction tool for cross-channel message reactions.
//!
//! Exposes `add_reaction` and `remove_reaction` from the [`Channel`] trait as an
//! agent-callable tool. The tool holds a late-binding channel map handle that is
//! populated once channels are initialized (after tool construction). This mirrors
//! the pattern used by [`DelegateTool`] for its parent-tools handle.
use super::traits::{Tool, ToolResult};
use crate::channels::traits::Channel;
use crate::security::policy::ToolOperation;
use crate::security::SecurityPolicy;
use async_trait::async_trait;
use parking_lot::RwLock;
use serde_json::json;
use std::collections::HashMap;
use std::sync::Arc;
/// Shared handle to the channel map. Starts empty; populated once channels boot.
pub type ChannelMapHandle = Arc<RwLock<HashMap<String, Arc<dyn Channel>>>>;
/// Agent-callable tool for adding or removing emoji reactions on messages.
pub struct ReactionTool {
channels: ChannelMapHandle,
security: Arc<SecurityPolicy>,
}
impl ReactionTool {
/// Create a new reaction tool with an empty channel map.
/// Call [`Self::populate`] or write through the handle returned by
/// [`Self::channel_map_handle`] once channels are available.
pub fn new(security: Arc<SecurityPolicy>) -> Self {
Self {
channels: Arc::new(RwLock::new(HashMap::new())),
security,
}
}
/// Return the shared handle so callers can populate it after channel init.
pub fn channel_map_handle(&self) -> ChannelMapHandle {
Arc::clone(&self.channels)
}
/// Convenience: populate the channel map from a pre-built map.
pub fn populate(&self, map: HashMap<String, Arc<dyn Channel>>) {
*self.channels.write() = map;
}
}
#[async_trait]
impl Tool for ReactionTool {
fn name(&self) -> &str {
"reaction"
}
fn description(&self) -> &str {
"Add or remove an emoji reaction on a message in any active channel. \
Provide the channel name (e.g. 'discord', 'slack'), the platform channel ID, \
the platform message ID, and the emoji (Unicode character or platform shortcode)."
}
fn parameters_schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"channel": {
"type": "string",
"description": "Name of the channel to react in (e.g. 'discord', 'slack', 'telegram')"
},
"channel_id": {
"type": "string",
"description": "Platform-specific channel/conversation identifier (e.g. Discord channel snowflake, Slack channel ID)"
},
"message_id": {
"type": "string",
"description": "Platform-scoped message identifier to react to"
},
"emoji": {
"type": "string",
"description": "Emoji to react with (Unicode character or platform shortcode)"
},
"action": {
"type": "string",
"enum": ["add", "remove"],
"description": "Whether to add or remove the reaction (default: 'add')"
}
},
"required": ["channel", "channel_id", "message_id", "emoji"]
})
}
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
// Security gate
if let Err(error) = self
.security
.enforce_tool_operation(ToolOperation::Act, "reaction")
{
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(error),
});
}
let channel_name = args
.get("channel")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing 'channel' parameter"))?;
let channel_id = args
.get("channel_id")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing 'channel_id' parameter"))?;
let message_id = args
.get("message_id")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing 'message_id' parameter"))?;
let emoji = args
.get("emoji")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing 'emoji' parameter"))?;
let action = args.get("action").and_then(|v| v.as_str()).unwrap_or("add");
if action != "add" && action != "remove" {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Invalid action '{action}': must be 'add' or 'remove'"
)),
});
}
// Read-lock the channel map to find the target channel.
let channel = {
let map = self.channels.read();
if map.is_empty() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("No channels available yet (channels not initialized)".to_string()),
});
}
match map.get(channel_name) {
Some(ch) => Arc::clone(ch),
None => {
let available: Vec<String> = map.keys().cloned().collect();
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!(
"Channel '{channel_name}' not found. Available channels: {}",
available.join(", ")
)),
});
}
}
};
let result = if action == "add" {
channel.add_reaction(channel_id, message_id, emoji).await
} else {
channel.remove_reaction(channel_id, message_id, emoji).await
};
let past_tense = if action == "remove" {
"removed"
} else {
"added"
};
match result {
Ok(()) => Ok(ToolResult {
success: true,
output: format!(
"Reaction {past_tense}: {emoji} on message {message_id} in {channel_name}"
),
error: None,
}),
Err(e) => Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!("Failed to {action} reaction: {e}")),
}),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::channels::traits::{ChannelMessage, SendMessage};
use std::sync::atomic::{AtomicBool, Ordering};
struct MockChannel {
reaction_added: AtomicBool,
reaction_removed: AtomicBool,
last_channel_id: parking_lot::Mutex<Option<String>>,
fail_on_add: bool,
}
impl MockChannel {
fn new() -> Self {
Self {
reaction_added: AtomicBool::new(false),
reaction_removed: AtomicBool::new(false),
last_channel_id: parking_lot::Mutex::new(None),
fail_on_add: false,
}
}
fn failing() -> Self {
Self {
reaction_added: AtomicBool::new(false),
reaction_removed: AtomicBool::new(false),
last_channel_id: parking_lot::Mutex::new(None),
fail_on_add: true,
}
}
}
#[async_trait]
impl Channel for MockChannel {
fn name(&self) -> &str {
"mock"
}
async fn send(&self, _message: &SendMessage) -> anyhow::Result<()> {
Ok(())
}
async fn listen(
&self,
_tx: tokio::sync::mpsc::Sender<ChannelMessage>,
) -> anyhow::Result<()> {
Ok(())
}
async fn add_reaction(
&self,
channel_id: &str,
_message_id: &str,
_emoji: &str,
) -> anyhow::Result<()> {
if self.fail_on_add {
return Err(anyhow::anyhow!("API error: rate limited"));
}
*self.last_channel_id.lock() = Some(channel_id.to_string());
self.reaction_added.store(true, Ordering::SeqCst);
Ok(())
}
async fn remove_reaction(
&self,
channel_id: &str,
_message_id: &str,
_emoji: &str,
) -> anyhow::Result<()> {
*self.last_channel_id.lock() = Some(channel_id.to_string());
self.reaction_removed.store(true, Ordering::SeqCst);
Ok(())
}
}
fn make_tool_with_channels(channels: Vec<(&str, Arc<dyn Channel>)>) -> ReactionTool {
let tool = ReactionTool::new(Arc::new(SecurityPolicy::default()));
let map: HashMap<String, Arc<dyn Channel>> = channels
.into_iter()
.map(|(name, ch)| (name.to_string(), ch))
.collect();
tool.populate(map);
tool
}
#[test]
fn tool_metadata() {
let tool = ReactionTool::new(Arc::new(SecurityPolicy::default()));
assert_eq!(tool.name(), "reaction");
assert!(!tool.description().is_empty());
let schema = tool.parameters_schema();
assert_eq!(schema["type"], "object");
assert!(schema["properties"]["channel"].is_object());
assert!(schema["properties"]["channel_id"].is_object());
assert!(schema["properties"]["message_id"].is_object());
assert!(schema["properties"]["emoji"].is_object());
assert!(schema["properties"]["action"].is_object());
let required = schema["required"].as_array().unwrap();
assert!(required.iter().any(|v| v == "channel"));
assert!(required.iter().any(|v| v == "channel_id"));
assert!(required.iter().any(|v| v == "message_id"));
assert!(required.iter().any(|v| v == "emoji"));
// action is optional (defaults to "add")
assert!(!required.iter().any(|v| v == "action"));
}
#[tokio::test]
async fn add_reaction_success() {
let mock: Arc<dyn Channel> = Arc::new(MockChannel::new());
let tool = make_tool_with_channels(vec![("discord", Arc::clone(&mock))]);
let result = tool
.execute(json!({
"channel": "discord",
"channel_id": "ch_001",
"message_id": "msg_123",
"emoji": "\u{2705}"
}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("added"));
assert!(result.error.is_none());
}
#[tokio::test]
async fn remove_reaction_success() {
let mock: Arc<dyn Channel> = Arc::new(MockChannel::new());
let tool = make_tool_with_channels(vec![("slack", Arc::clone(&mock))]);
let result = tool
.execute(json!({
"channel": "slack",
"channel_id": "C0123SLACK",
"message_id": "msg_456",
"emoji": "\u{1F440}",
"action": "remove"
}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("removed"));
}
#[tokio::test]
async fn unknown_channel_returns_error() {
let tool = make_tool_with_channels(vec![(
"discord",
Arc::new(MockChannel::new()) as Arc<dyn Channel>,
)]);
let result = tool
.execute(json!({
"channel": "nonexistent",
"channel_id": "ch_x",
"message_id": "msg_1",
"emoji": "\u{2705}"
}))
.await
.unwrap();
assert!(!result.success);
let err = result.error.as_deref().unwrap();
assert!(err.contains("not found"));
assert!(err.contains("discord"));
}
#[tokio::test]
async fn invalid_action_returns_error() {
let tool = make_tool_with_channels(vec![(
"discord",
Arc::new(MockChannel::new()) as Arc<dyn Channel>,
)]);
let result = tool
.execute(json!({
"channel": "discord",
"channel_id": "ch_001",
"message_id": "msg_1",
"emoji": "\u{2705}",
"action": "toggle"
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_deref().unwrap().contains("toggle"));
}
#[tokio::test]
async fn channel_error_propagated() {
let mock: Arc<dyn Channel> = Arc::new(MockChannel::failing());
let tool = make_tool_with_channels(vec![("discord", mock)]);
let result = tool
.execute(json!({
"channel": "discord",
"channel_id": "ch_001",
"message_id": "msg_1",
"emoji": "\u{2705}"
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_deref().unwrap().contains("rate limited"));
}
#[tokio::test]
async fn missing_required_params() {
let tool = make_tool_with_channels(vec![(
"test",
Arc::new(MockChannel::new()) as Arc<dyn Channel>,
)]);
// Missing channel
let result = tool
.execute(json!({"channel_id": "c1", "message_id": "1", "emoji": "x"}))
.await;
assert!(result.is_err());
// Missing channel_id
let result = tool
.execute(json!({"channel": "test", "message_id": "1", "emoji": "x"}))
.await;
assert!(result.is_err());
// Missing message_id
let result = tool
.execute(json!({"channel": "a", "channel_id": "c1", "emoji": "x"}))
.await;
assert!(result.is_err());
// Missing emoji
let result = tool
.execute(json!({"channel": "a", "channel_id": "c1", "message_id": "1"}))
.await;
assert!(result.is_err());
}
#[tokio::test]
async fn empty_channels_returns_not_initialized() {
let tool = ReactionTool::new(Arc::new(SecurityPolicy::default()));
// No channels populated
let result = tool
.execute(json!({
"channel": "discord",
"channel_id": "ch_001",
"message_id": "msg_1",
"emoji": "\u{2705}"
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_deref().unwrap().contains("not initialized"));
}
#[tokio::test]
async fn default_action_is_add() {
let mock = Arc::new(MockChannel::new());
let mock_ch: Arc<dyn Channel> = Arc::clone(&mock) as Arc<dyn Channel>;
let tool = make_tool_with_channels(vec![("test", mock_ch)]);
let result = tool
.execute(json!({
"channel": "test",
"channel_id": "ch_test",
"message_id": "msg_1",
"emoji": "\u{2705}"
}))
.await
.unwrap();
assert!(result.success);
assert!(mock.reaction_added.load(Ordering::SeqCst));
assert!(!mock.reaction_removed.load(Ordering::SeqCst));
}
#[tokio::test]
async fn channel_id_passed_to_trait_not_channel_name() {
let mock = Arc::new(MockChannel::new());
let mock_ch: Arc<dyn Channel> = Arc::clone(&mock) as Arc<dyn Channel>;
let tool = make_tool_with_channels(vec![("discord", mock_ch)]);
let result = tool
.execute(json!({
"channel": "discord",
"channel_id": "123456789",
"message_id": "msg_1",
"emoji": "\u{2705}"
}))
.await
.unwrap();
assert!(result.success);
// The trait must receive the platform channel_id, not the channel name
assert_eq!(
mock.last_channel_id.lock().as_deref(),
Some("123456789"),
"add_reaction must receive channel_id, not channel name"
);
}
#[tokio::test]
async fn channel_map_handle_allows_late_binding() {
let tool = ReactionTool::new(Arc::new(SecurityPolicy::default()));
let handle = tool.channel_map_handle();
// Initially empty — tool reports not initialized
let result = tool
.execute(json!({
"channel": "slack",
"channel_id": "C0123",
"message_id": "msg_1",
"emoji": "\u{2705}"
}))
.await
.unwrap();
assert!(!result.success);
// Populate via the handle
{
let mut map = handle.write();
map.insert(
"slack".to_string(),
Arc::new(MockChannel::new()) as Arc<dyn Channel>,
);
}
// Now the tool can route to the channel
let result = tool
.execute(json!({
"channel": "slack",
"channel_id": "C0123",
"message_id": "msg_1",
"emoji": "\u{2705}"
}))
.await
.unwrap();
assert!(result.success);
}
#[test]
fn spec_matches_metadata() {
let tool = ReactionTool::new(Arc::new(SecurityPolicy::default()));
let spec = tool.spec();
assert_eq!(spec.name, "reaction");
assert_eq!(spec.description, tool.description());
assert!(spec.parameters["required"].is_array());
}
}
+573
@@ -0,0 +1,573 @@
//! Session-to-session messaging tools for inter-agent communication.
//!
//! Provides three tools:
//! - `sessions_list` — list active sessions with metadata
//! - `sessions_history` — read message history from a specific session
//! - `sessions_send` — send a message to a specific session
use super::traits::{Tool, ToolResult};
use crate::channels::session_backend::SessionBackend;
use crate::security::policy::ToolOperation;
use crate::security::SecurityPolicy;
use async_trait::async_trait;
use serde_json::json;
use std::fmt::Write;
use std::sync::Arc;
/// Validate that a session ID is non-empty and contains at least one
/// alphanumeric character (prevents blank keys after sanitization).
fn validate_session_id(session_id: &str) -> Result<(), ToolResult> {
let trimmed = session_id.trim();
if trimmed.is_empty() || !trimmed.chars().any(|c| c.is_alphanumeric()) {
return Err(ToolResult {
success: false,
output: String::new(),
error: Some(
"Invalid 'session_id': must be non-empty and contain at least one alphanumeric character.".into(),
),
});
}
Ok(())
}
// ── SessionsListTool ────────────────────────────────────────────────
/// Lists active sessions with their channel, last activity time, and message count.
pub struct SessionsListTool {
backend: Arc<dyn SessionBackend>,
}
impl SessionsListTool {
pub fn new(backend: Arc<dyn SessionBackend>) -> Self {
Self { backend }
}
}
#[async_trait]
impl Tool for SessionsListTool {
fn name(&self) -> &str {
"sessions_list"
}
fn description(&self) -> &str {
"List all active conversation sessions with their channel, last activity time, and message count."
}
fn parameters_schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"limit": {
"type": "integer",
"description": "Max sessions to return (default: 50)"
}
}
})
}
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
#[allow(clippy::cast_possible_truncation)]
let limit = args
.get("limit")
.and_then(serde_json::Value::as_u64)
.map_or(50, |v| v as usize);
let metadata = self.backend.list_sessions_with_metadata();
if metadata.is_empty() {
return Ok(ToolResult {
success: true,
output: "No active sessions found.".into(),
error: None,
});
}
let capped: Vec<_> = metadata.into_iter().take(limit).collect();
let mut output = format!("Found {} session(s):\n", capped.len());
for meta in &capped {
// Extract channel from key (convention: channel__identifier)
let channel = meta.key.split("__").next().unwrap_or(&meta.key);
let _ = writeln!(
output,
"- {}: channel={}, messages={}, last_activity={}",
meta.key, channel, meta.message_count, meta.last_activity
);
}
Ok(ToolResult {
success: true,
output,
error: None,
})
}
}
// ── SessionsHistoryTool ─────────────────────────────────────────────
/// Reads the message history of a specific session by ID.
pub struct SessionsHistoryTool {
backend: Arc<dyn SessionBackend>,
security: Arc<SecurityPolicy>,
}
impl SessionsHistoryTool {
pub fn new(backend: Arc<dyn SessionBackend>, security: Arc<SecurityPolicy>) -> Self {
Self { backend, security }
}
}
#[async_trait]
impl Tool for SessionsHistoryTool {
fn name(&self) -> &str {
"sessions_history"
}
fn description(&self) -> &str {
"Read the message history of a specific session by its session ID. Returns the last N messages."
}
fn parameters_schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"session_id": {
"type": "string",
"description": "The session ID to read history from (e.g. telegram__user123)"
},
"limit": {
"type": "integer",
"description": "Max messages to return, from most recent (default: 20)"
}
},
"required": ["session_id"]
})
}
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
if let Err(error) = self
.security
.enforce_tool_operation(ToolOperation::Read, "sessions_history")
{
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(error),
});
}
let session_id = args
.get("session_id")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing 'session_id' parameter"))?;
if let Err(result) = validate_session_id(session_id) {
return Ok(result);
}
#[allow(clippy::cast_possible_truncation)]
let limit = args
.get("limit")
.and_then(serde_json::Value::as_u64)
.map_or(20, |v| v as usize);
let messages = self.backend.load(session_id);
if messages.is_empty() {
return Ok(ToolResult {
success: true,
output: format!("No messages found for session '{session_id}'."),
error: None,
});
}
// Take the last `limit` messages
let start = messages.len().saturating_sub(limit);
let tail = &messages[start..];
let mut output = format!(
"Session '{}': showing {}/{} messages\n",
session_id,
tail.len(),
messages.len()
);
for msg in tail {
let _ = writeln!(output, "[{}] {}", msg.role, msg.content);
}
Ok(ToolResult {
success: true,
output,
error: None,
})
}
}
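The last-`limit` window used by `sessions_history` (`saturating_sub` plus a slice) cannot underflow when `limit` exceeds the message count. The same pattern, stand-alone:

```rust
// Return the last `limit` items of a slice without panicking when
// limit > len: saturating_sub clamps the start index at 0.
fn tail<T>(items: &[T], limit: usize) -> &[T] {
    let start = items.len().saturating_sub(limit);
    &items[start..]
}

fn main() {
    let msgs = ["a", "b", "c"];
    assert_eq!(tail(&msgs, 2), ["b", "c"]);
    assert_eq!(tail(&msgs, 10), ["a", "b", "c"]); // limit > len is safe
    assert_eq!(tail(&msgs, 0), [""; 0]);          // empty window
}
```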
// ── SessionsSendTool ────────────────────────────────────────────────
/// Sends a message to a specific session, enabling inter-agent communication.
pub struct SessionsSendTool {
backend: Arc<dyn SessionBackend>,
security: Arc<SecurityPolicy>,
}
impl SessionsSendTool {
pub fn new(backend: Arc<dyn SessionBackend>, security: Arc<SecurityPolicy>) -> Self {
Self { backend, security }
}
}
#[async_trait]
impl Tool for SessionsSendTool {
fn name(&self) -> &str {
"sessions_send"
}
fn description(&self) -> &str {
"Send a message to a specific session by its session ID. The message is appended to the session's conversation history as a 'user' message, enabling inter-agent communication."
}
fn parameters_schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"session_id": {
"type": "string",
"description": "The target session ID (e.g. telegram__user123)"
},
"message": {
"type": "string",
"description": "The message content to send"
}
},
"required": ["session_id", "message"]
})
}
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
if let Err(error) = self
.security
.enforce_tool_operation(ToolOperation::Act, "sessions_send")
{
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(error),
});
}
let session_id = args
.get("session_id")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing 'session_id' parameter"))?;
if let Err(result) = validate_session_id(session_id) {
return Ok(result);
}
let message = args
.get("message")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing 'message' parameter"))?;
if message.trim().is_empty() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Message content must not be empty.".into()),
});
}
let chat_msg = crate::providers::traits::ChatMessage::user(message);
match self.backend.append(session_id, &chat_msg) {
Ok(()) => Ok(ToolResult {
success: true,
output: format!("Message sent to session '{session_id}'."),
error: None,
}),
Err(e) => Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!("Failed to send message: {e}")),
}),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::channels::session_store::SessionStore;
use crate::providers::traits::ChatMessage;
use tempfile::TempDir;
fn test_security() -> Arc<SecurityPolicy> {
Arc::new(SecurityPolicy::default())
}
fn test_backend() -> (TempDir, Arc<dyn SessionBackend>) {
let tmp = TempDir::new().unwrap();
let store = SessionStore::new(tmp.path()).unwrap();
(tmp, Arc::new(store))
}
fn seeded_backend() -> (TempDir, Arc<dyn SessionBackend>) {
let tmp = TempDir::new().unwrap();
let store = SessionStore::new(tmp.path()).unwrap();
store
.append("telegram__alice", &ChatMessage::user("Hello from Alice"))
.unwrap();
store
.append(
"telegram__alice",
&ChatMessage::assistant("Hi Alice, how can I help?"),
)
.unwrap();
store
.append("discord__bob", &ChatMessage::user("Hey from Bob"))
.unwrap();
(tmp, Arc::new(store))
}
// ── SessionsListTool tests ──────────────────────────────────────
#[tokio::test]
async fn list_empty_sessions() {
let (_tmp, backend) = test_backend();
let tool = SessionsListTool::new(backend);
let result = tool.execute(json!({})).await.unwrap();
assert!(result.success);
assert!(result.output.contains("No active sessions"));
}
#[tokio::test]
async fn list_sessions_shows_all() {
let (_tmp, backend) = seeded_backend();
let tool = SessionsListTool::new(backend);
let result = tool.execute(json!({})).await.unwrap();
assert!(result.success);
assert!(result.output.contains("2 session(s)"));
assert!(result.output.contains("telegram__alice"));
assert!(result.output.contains("discord__bob"));
}
#[tokio::test]
async fn list_sessions_respects_limit() {
let (_tmp, backend) = seeded_backend();
let tool = SessionsListTool::new(backend);
let result = tool.execute(json!({"limit": 1})).await.unwrap();
assert!(result.success);
assert!(result.output.contains("1 session(s)"));
}
#[tokio::test]
async fn list_sessions_extracts_channel() {
let (_tmp, backend) = seeded_backend();
let tool = SessionsListTool::new(backend);
let result = tool.execute(json!({})).await.unwrap();
assert!(result.output.contains("channel=telegram"));
assert!(result.output.contains("channel=discord"));
}
#[test]
fn list_tool_name_and_schema() {
let (_tmp, backend) = test_backend();
let tool = SessionsListTool::new(backend);
assert_eq!(tool.name(), "sessions_list");
assert!(tool.parameters_schema()["properties"]["limit"].is_object());
}
// ── SessionsHistoryTool tests ───────────────────────────────────
#[tokio::test]
async fn history_empty_session() {
let (_tmp, backend) = test_backend();
let tool = SessionsHistoryTool::new(backend, test_security());
let result = tool
.execute(json!({"session_id": "nonexistent"}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("No messages found"));
}
#[tokio::test]
async fn history_returns_messages() {
let (_tmp, backend) = seeded_backend();
let tool = SessionsHistoryTool::new(backend, test_security());
let result = tool
.execute(json!({"session_id": "telegram__alice"}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("showing 2/2 messages"));
assert!(result.output.contains("[user] Hello from Alice"));
assert!(result.output.contains("[assistant] Hi Alice"));
}
#[tokio::test]
async fn history_respects_limit() {
let (_tmp, backend) = seeded_backend();
let tool = SessionsHistoryTool::new(backend, test_security());
let result = tool
.execute(json!({"session_id": "telegram__alice", "limit": 1}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("showing 1/2 messages"));
// Should show only the last message
assert!(result.output.contains("[assistant]"));
assert!(!result.output.contains("[user] Hello from Alice"));
}
#[tokio::test]
async fn history_missing_session_id() {
let (_tmp, backend) = test_backend();
let tool = SessionsHistoryTool::new(backend, test_security());
let result = tool.execute(json!({})).await;
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("session_id"));
}
#[tokio::test]
async fn history_rejects_empty_session_id() {
let (_tmp, backend) = test_backend();
let tool = SessionsHistoryTool::new(backend, test_security());
let result = tool.execute(json!({"session_id": " "})).await.unwrap();
assert!(!result.success);
assert!(result.error.unwrap().contains("Invalid"));
}
#[test]
fn history_tool_name_and_schema() {
let (_tmp, backend) = test_backend();
let tool = SessionsHistoryTool::new(backend, test_security());
assert_eq!(tool.name(), "sessions_history");
let schema = tool.parameters_schema();
assert!(schema["properties"]["session_id"].is_object());
assert!(schema["required"]
.as_array()
.unwrap()
.contains(&json!("session_id")));
}
// ── SessionsSendTool tests ──────────────────────────────────────
#[tokio::test]
async fn send_appends_message() {
let (_tmp, backend) = test_backend();
let tool = SessionsSendTool::new(backend.clone(), test_security());
let result = tool
.execute(json!({
"session_id": "telegram__alice",
"message": "Hello from another agent"
}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("Message sent"));
// Verify message was appended
let messages = backend.load("telegram__alice");
assert_eq!(messages.len(), 1);
assert_eq!(messages[0].role, "user");
assert_eq!(messages[0].content, "Hello from another agent");
}
#[tokio::test]
async fn send_to_existing_session() {
let (_tmp, backend) = seeded_backend();
let tool = SessionsSendTool::new(backend.clone(), test_security());
let result = tool
.execute(json!({
"session_id": "telegram__alice",
"message": "Inter-agent message"
}))
.await
.unwrap();
assert!(result.success);
let messages = backend.load("telegram__alice");
assert_eq!(messages.len(), 3);
assert_eq!(messages[2].content, "Inter-agent message");
}
#[tokio::test]
async fn send_rejects_empty_message() {
let (_tmp, backend) = test_backend();
let tool = SessionsSendTool::new(backend, test_security());
let result = tool
.execute(json!({
"session_id": "telegram__alice",
"message": " "
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.unwrap().contains("empty"));
}
#[tokio::test]
async fn send_rejects_empty_session_id() {
let (_tmp, backend) = test_backend();
let tool = SessionsSendTool::new(backend, test_security());
let result = tool
.execute(json!({
"session_id": "",
"message": "hello"
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.unwrap().contains("Invalid"));
}
#[tokio::test]
async fn send_rejects_non_alphanumeric_session_id() {
let (_tmp, backend) = test_backend();
let tool = SessionsSendTool::new(backend, test_security());
let result = tool
.execute(json!({
"session_id": "///",
"message": "hello"
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.unwrap().contains("Invalid"));
}
#[tokio::test]
async fn send_missing_session_id() {
let (_tmp, backend) = test_backend();
let tool = SessionsSendTool::new(backend, test_security());
let result = tool.execute(json!({"message": "hi"})).await;
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("session_id"));
}
#[tokio::test]
async fn send_missing_message() {
let (_tmp, backend) = test_backend();
let tool = SessionsSendTool::new(backend, test_security());
let result = tool.execute(json!({"session_id": "telegram__alice"})).await;
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("message"));
}
#[test]
fn send_tool_name_and_schema() {
let (_tmp, backend) = test_backend();
let tool = SessionsSendTool::new(backend, test_security());
assert_eq!(tool.name(), "sessions_send");
let schema = tool.parameters_schema();
assert!(schema["required"]
.as_array()
.unwrap()
.contains(&json!("session_id")));
assert!(schema["required"]
.as_array()
.unwrap()
.contains(&json!("message")));
}
}
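`validate_session_id` is called by both tools but defined outside this hunk. From the tests — which reject `""`, `" "`, and `"///"` with an "Invalid" error while accepting `telegram__alice` — a plausible reconstruction looks like the sketch below. The exact character rules are an assumption, and the real helper returns the `ToolResult` itself (`Err(result)`), simplified here to a `String`:

```rust
// Hypothetical reconstruction of validate_session_id, inferred only
// from the tests in this diff: empty/whitespace-only ids and ids with
// characters outside [A-Za-z0-9_-] are rejected.
fn validate_session_id(id: &str) -> Result<(), String> {
    let trimmed = id.trim();
    if trimmed.is_empty()
        || !trimmed
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || c == '_' || c == '-')
    {
        return Err(format!("Invalid session_id: '{id}'"));
    }
    Ok(())
}

fn main() {
    assert!(validate_session_id("telegram__alice").is_ok());
    assert!(validate_session_id(" ").is_err()); // whitespace-only
    assert!(validate_session_id("///").is_err()); // non-alphanumeric
}
```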
@@ -34,6 +34,7 @@ image_info = "Read image file metadata (format, dimensions, size) and optionally
jira = "Interact with Jira: get tickets with configurable detail level, search issues with JQL, and add comments with mention and formatting support."
knowledge = "Manage a knowledge graph of architecture decisions, solution patterns, lessons learned, and experts. Actions: capture, search, relate, suggest, expert_find, lessons_extract, graph_stats."
linkedin = "Manage LinkedIn: create posts, list your posts, comment, react, delete posts, view engagement, get profile info, and read the configured content strategy. Requires LINKEDIN_* credentials in .env file."
discord_search = "Search Discord message history stored in discord.db. Use to find past messages, summarize channel activity, or look up what users said. Supports keyword search and optional filters: channel_id, since, until."
memory_forget = "Remove a memory by key. Use to delete outdated facts or sensitive data. Returns whether the memory was found and removed."
memory_recall = "Search long-term memory for relevant facts, preferences, or context. Returns scored results ranked by relevance."
memory_store = "Store a fact, preference, or note in long-term memory. Use category 'core' for permanent facts, 'daily' for session notes, 'conversation' for chat context, or a custom category name."
@@ -13,6 +13,7 @@ import Cost from './pages/Cost';
import Logs from './pages/Logs';
import Doctor from './pages/Doctor';
import Pairing from './pages/Pairing';
import Canvas from './pages/Canvas';
import { AuthProvider, useAuth } from './hooks/useAuth';
import { DraftContext, useDraftStore } from './hooks/useDraft';
import { setLocale, type Locale } from './lib/i18n';
@@ -234,6 +235,7 @@ function AppContent() {
<Route path="/logs" element={<Logs />} />
<Route path="/doctor" element={<Doctor />} />
<Route path="/pairing" element={<Pairing />} />
<Route path="/canvas" element={<Canvas />} />
<Route path="*" element={<Navigate to="/" replace />} />
</Route>
</Routes>
@@ -11,6 +11,7 @@ import {
DollarSign,
Activity,
Stethoscope,
Monitor,
} from 'lucide-react';
import { t } from '@/lib/i18n';
@@ -25,6 +26,7 @@ const navItems = [
{ to: '/cost', icon: DollarSign, labelKey: 'nav.cost' },
{ to: '/logs', icon: Activity, labelKey: 'nav.logs' },
{ to: '/doctor', icon: Stethoscope, labelKey: 'nav.doctor' },
{ to: '/canvas', icon: Monitor, labelKey: 'nav.canvas' },
];
export default function Sidebar() {
@@ -20,6 +20,7 @@ const translations: Record<Locale, Record<string, string>> = {
'nav.cost': '成本追踪',
'nav.logs': '日志',
'nav.doctor': '诊断',
'nav.canvas': '画布',
// Dashboard
'dashboard.title': '仪表盘',
@@ -350,6 +351,7 @@ const translations: Record<Locale, Record<string, string>> = {
'nav.cost': 'Cost Tracker',
'nav.logs': 'Logs',
'nav.doctor': 'Doctor',
'nav.canvas': 'Canvas',
// Dashboard
'dashboard.title': 'Dashboard',
@@ -680,6 +682,7 @@ const translations: Record<Locale, Record<string, string>> = {
'nav.cost': 'Maliyet Takibi',
'nav.logs': 'Kayıtlar',
'nav.doctor': 'Doktor',
'nav.canvas': 'Tuval',
// Dashboard
'dashboard.title': 'Kontrol Paneli',
@@ -0,0 +1,355 @@
import { useState, useEffect, useRef, useCallback } from 'react';
import { Monitor, Trash2, History, RefreshCw } from 'lucide-react';
import { apiFetch } from '@/lib/api';
import { basePath } from '@/lib/basePath';
import { getToken } from '@/lib/auth';
interface CanvasFrame {
frame_id: string;
content_type: string;
content: string;
timestamp: string;
}
interface WsCanvasMessage {
type: string;
canvas_id: string;
frame?: CanvasFrame;
}
export default function Canvas() {
const [canvasId, setCanvasId] = useState('default');
const [canvasIdInput, setCanvasIdInput] = useState('default');
const [currentFrame, setCurrentFrame] = useState<CanvasFrame | null>(null);
const [history, setHistory] = useState<CanvasFrame[]>([]);
const [connected, setConnected] = useState(false);
const [showHistory, setShowHistory] = useState(false);
const [canvasList, setCanvasList] = useState<string[]>([]);
const wsRef = useRef<WebSocket | null>(null);
const iframeRef = useRef<HTMLIFrameElement>(null);
// Build WebSocket URL for canvas
const getWsUrl = useCallback((id: string) => {
const proto = location.protocol === 'https:' ? 'wss:' : 'ws:';
const base = basePath || '';
return `${proto}//${location.host}${base}/ws/canvas/${encodeURIComponent(id)}`;
}, []);
// Connect to canvas WebSocket
const connectWs = useCallback((id: string) => {
if (wsRef.current) {
wsRef.current.close();
}
const token = getToken();
const protocols = token ? ['zeroclaw.v1', `bearer.${token}`] : ['zeroclaw.v1'];
const ws = new WebSocket(getWsUrl(id), protocols);
ws.onopen = () => setConnected(true);
ws.onclose = () => setConnected(false);
ws.onerror = () => setConnected(false);
ws.onmessage = (event) => {
try {
const msg: WsCanvasMessage = JSON.parse(event.data);
if (msg.type === 'frame' && msg.frame) {
if (msg.frame.content_type === 'clear') {
setCurrentFrame(null);
setHistory([]);
} else {
setCurrentFrame(msg.frame);
setHistory((prev) => [...prev.slice(-49), msg.frame!]);
}
}
} catch {
// ignore parse errors
}
};
wsRef.current = ws;
}, [getWsUrl]);
// Connect on mount and when canvasId changes
useEffect(() => {
connectWs(canvasId);
return () => {
wsRef.current?.close();
};
}, [canvasId, connectWs]);
// Fetch canvas list periodically
useEffect(() => {
const fetchList = async () => {
try {
const data = await apiFetch<{ canvases: string[] }>('/api/canvas');
setCanvasList(data.canvases || []);
} catch {
// ignore
}
};
fetchList();
const interval = setInterval(fetchList, 5000);
return () => clearInterval(interval);
}, []);
// Render content into the iframe
useEffect(() => {
if (!iframeRef.current || !currentFrame) return;
if (currentFrame.content_type === 'eval') return; // eval frames are not rendered into the iframe
const iframe = iframeRef.current;
const doc = iframe.contentDocument;
if (!doc) return;
let html = currentFrame.content;
if (currentFrame.content_type === 'svg') {
html = `<!DOCTYPE html><html><head><style>body{margin:0;display:flex;align-items:center;justify-content:center;min-height:100vh;background:#1a1a2e;}</style></head><body>${currentFrame.content}</body></html>`;
} else if (currentFrame.content_type === 'markdown') {
// No markdown parser here: escape the content and render it as preformatted text with basic styling
const escaped = currentFrame.content
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;');
html = `<!DOCTYPE html><html><head><style>body{margin:1rem;font-family:system-ui,sans-serif;color:#e0e0e0;background:#1a1a2e;line-height:1.6;}pre{white-space:pre-wrap;word-wrap:break-word;}</style></head><body><pre>${escaped}</pre></body></html>`;
} else if (currentFrame.content_type === 'text') {
const escaped = currentFrame.content
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;');
html = `<!DOCTYPE html><html><head><style>body{margin:1rem;font-family:monospace;color:#e0e0e0;background:#1a1a2e;white-space:pre-wrap;}</style></head><body>${escaped}</body></html>`;
}
doc.open();
doc.write(html);
doc.close();
}, [currentFrame]);
const handleSwitchCanvas = () => {
if (canvasIdInput.trim()) {
setCanvasId(canvasIdInput.trim());
setCurrentFrame(null);
setHistory([]);
}
};
const handleClear = async () => {
try {
await apiFetch(`/api/canvas/${encodeURIComponent(canvasId)}`, {
method: 'DELETE',
});
setCurrentFrame(null);
setHistory([]);
} catch {
// ignore
}
};
const handleSelectHistoryFrame = (frame: CanvasFrame) => {
setCurrentFrame(frame);
};
return (
<div className="p-6 space-y-4 h-full flex flex-col">
{/* Header */}
<div className="flex items-center justify-between">
<div className="flex items-center gap-3">
<Monitor className="h-6 w-6" style={{ color: 'var(--pc-accent)' }} />
<h1 className="text-xl font-semibold" style={{ color: 'var(--pc-text-primary)' }}>
Live Canvas
</h1>
<span
className="text-xs px-2 py-0.5 rounded-full font-medium"
style={{
background: connected ? 'rgba(34, 197, 94, 0.15)' : 'rgba(239, 68, 68, 0.15)',
color: connected ? '#22c55e' : '#ef4444',
}}
>
{connected ? 'Connected' : 'Disconnected'}
</span>
</div>
<div className="flex items-center gap-2">
<button
onClick={() => setShowHistory(!showHistory)}
className="p-2 rounded-lg transition-colors hover:opacity-80"
style={{ background: 'var(--pc-bg-elevated)', color: 'var(--pc-text-muted)' }}
title="Toggle history"
>
<History className="h-4 w-4" />
</button>
<button
onClick={handleClear}
className="p-2 rounded-lg transition-colors hover:opacity-80"
style={{ background: 'var(--pc-bg-elevated)', color: 'var(--pc-text-muted)' }}
title="Clear canvas"
>
<Trash2 className="h-4 w-4" />
</button>
<button
onClick={() => connectWs(canvasId)}
className="p-2 rounded-lg transition-colors hover:opacity-80"
style={{ background: 'var(--pc-bg-elevated)', color: 'var(--pc-text-muted)' }}
title="Reconnect"
>
<RefreshCw className="h-4 w-4" />
</button>
</div>
</div>
{/* Canvas selector */}
<div className="flex items-center gap-2">
<input
type="text"
value={canvasIdInput}
onChange={(e) => setCanvasIdInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && handleSwitchCanvas()}
placeholder="Canvas ID"
className="px-3 py-1.5 rounded-lg text-sm border"
style={{
background: 'var(--pc-bg-elevated)',
borderColor: 'var(--pc-border)',
color: 'var(--pc-text-primary)',
}}
/>
<button
onClick={handleSwitchCanvas}
className="px-3 py-1.5 rounded-lg text-sm font-medium"
style={{ background: 'var(--pc-accent)', color: '#fff' }}
>
Switch
</button>
{canvasList.length > 0 && (
<div className="flex items-center gap-1 ml-2">
<span className="text-xs" style={{ color: 'var(--pc-text-muted)' }}>Active:</span>
{canvasList.map((id) => (
<button
key={id}
onClick={() => {
setCanvasIdInput(id);
setCanvasId(id);
setCurrentFrame(null);
setHistory([]);
}}
className="px-2 py-0.5 rounded text-xs font-mono transition-colors"
style={{
background: id === canvasId ? 'var(--pc-accent-dim)' : 'var(--pc-bg-elevated)',
color: id === canvasId ? 'var(--pc-accent)' : 'var(--pc-text-muted)',
borderColor: 'var(--pc-border)',
}}
>
{id}
</button>
))}
</div>
)}
</div>
{/* Main content area */}
<div className="flex-1 flex gap-4 min-h-0">
{/* Canvas viewer */}
<div
className="flex-1 rounded-lg border overflow-hidden"
style={{ borderColor: 'var(--pc-border)', background: '#1a1a2e' }}
>
{currentFrame ? (
<iframe
ref={iframeRef}
sandbox="allow-scripts"
className="w-full h-full border-0"
title={`Canvas: ${canvasId}`}
style={{ background: '#1a1a2e' }}
/>
) : (
<div className="flex items-center justify-center h-full">
<div className="text-center">
<Monitor
className="h-12 w-12 mx-auto mb-3 opacity-30"
style={{ color: 'var(--pc-text-muted)' }}
/>
<p className="text-sm" style={{ color: 'var(--pc-text-muted)' }}>
Waiting for content on canvas <span className="font-mono">"{canvasId}"</span>
</p>
<p className="text-xs mt-1" style={{ color: 'var(--pc-text-muted)', opacity: 0.6 }}>
The agent can push content here using the canvas tool
</p>
</div>
</div>
)}
</div>
{/* History panel */}
{showHistory && (
<div
className="w-64 rounded-lg border overflow-y-auto"
style={{
borderColor: 'var(--pc-border)',
background: 'var(--pc-bg-elevated)',
}}
>
<div
className="px-3 py-2 border-b text-xs font-medium sticky top-0"
style={{
borderColor: 'var(--pc-border)',
background: 'var(--pc-bg-elevated)',
color: 'var(--pc-text-muted)',
}}
>
Frame History ({history.length})
</div>
{history.length === 0 ? (
<p className="p-3 text-xs" style={{ color: 'var(--pc-text-muted)' }}>
No frames yet
</p>
) : (
<div className="space-y-1 p-2">
{[...history].reverse().map((frame) => (
<button
key={frame.frame_id}
onClick={() => handleSelectHistoryFrame(frame)}
className="w-full text-left px-2 py-1.5 rounded text-xs transition-colors"
style={{
background:
currentFrame?.frame_id === frame.frame_id
? 'var(--pc-accent-dim)'
: 'transparent',
color: 'var(--pc-text-primary)',
}}
>
<div className="flex items-center justify-between">
<span className="font-mono truncate" style={{ color: 'var(--pc-accent)' }}>
{frame.content_type}
</span>
<span style={{ color: 'var(--pc-text-muted)' }}>
{new Date(frame.timestamp).toLocaleTimeString()}
</span>
</div>
<div
className="truncate mt-0.5"
style={{ color: 'var(--pc-text-muted)', fontSize: '0.65rem' }}
>
{frame.content.substring(0, 60)}
{frame.content.length > 60 ? '...' : ''}
</div>
</button>
))}
</div>
)}
</div>
)}
</div>
{/* Frame info bar */}
{currentFrame && (
<div
className="flex items-center justify-between px-3 py-1.5 rounded-lg text-xs"
style={{ background: 'var(--pc-bg-elevated)', color: 'var(--pc-text-muted)' }}
>
<span>
Type: <span className="font-mono">{currentFrame.content_type}</span> | Frame:{' '}
<span className="font-mono">{currentFrame.frame_id.substring(0, 8)}</span>
</span>
<span>{new Date(currentFrame.timestamp).toLocaleString()}</span>
</div>
)}
</div>
);
}
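Frame history in the Canvas page is capped client-side with `[...prev.slice(-49), msg.frame!]`, i.e. a sliding window of the 50 most recent frames. The equivalent bounded push, sketched in Rust:

```rust
// Push onto a Vec while keeping only the `cap` most recent items,
// mirroring the client's `[...prev.slice(-49), frame]` window of 50.
fn push_capped<T>(buf: &mut Vec<T>, item: T, cap: usize) {
    buf.push(item);
    if buf.len() > cap {
        let overflow = buf.len() - cap;
        buf.drain(..overflow); // drop the oldest frames first
    }
}

fn main() {
    let mut frames = Vec::new();
    for i in 0..60 {
        push_capped(&mut frames, i, 50);
    }
    assert_eq!(frames.len(), 50);
    assert_eq!(frames[0], 10); // oldest surviving frame
    assert_eq!(*frames.last().unwrap(), 59);
}
```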
@@ -1,4 +1,4 @@
import { useState, useEffect } from 'react';
import { useState, useEffect, useRef, useCallback } from 'react';
import {
Settings,
Save,
@@ -9,6 +9,91 @@ import {
import { getConfig, putConfig } from '@/lib/api';
import { t } from '@/lib/i18n';
// ---------------------------------------------------------------------------
// Lightweight zero-dependency TOML syntax highlighter.
// Produces an HTML string. The <pre> overlay sits behind the <textarea> so
// the textarea remains the editable surface; the pre just provides colour.
// ---------------------------------------------------------------------------
function highlightToml(raw: string): string {
const lines = raw.split('\n');
const result: string[] = [];
for (const line of lines) {
const escaped = line
.replace(/&/g, '&amp;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;');
// Section header [section] or [[array]]
if (/^\s*\[/.test(escaped)) {
result.push(`<span style="color:#67e8f9;font-weight:600">${escaped}</span>`);
continue;
}
// Comment line
if (/^\s*#/.test(escaped)) {
result.push(`<span style="color:#52525b;font-style:italic">${escaped}</span>`);
continue;
}
// Key = value line
const kvMatch = escaped.match(/^(\s*)([A-Za-z0-9_\-.]+)(\s*=\s*)(.*)$/);
if (kvMatch) {
const [, indent, key, eq, rawValue] = kvMatch;
const value = colorValue(rawValue ?? '');
result.push(
`${indent}<span style="color:#a78bfa">${key}</span>`
+ `<span style="color:#71717a">${eq}</span>${value}`
);
continue;
}
result.push(escaped);
}
return result.join('\n') + '\n';
}
function colorValue(v: string): string {
const trimmed = v.trim();
const commentIdx = findUnquotedHash(trimmed);
if (commentIdx !== -1) {
const valueCore = trimmed.slice(0, commentIdx).trimEnd();
const comment = `<span style="color:#52525b;font-style:italic">${trimmed.slice(commentIdx)}</span>`;
const leading = v.slice(0, v.indexOf(trimmed));
return leading + colorScalar(valueCore) + ' ' + comment;
}
return colorScalar(v);
}
function findUnquotedHash(s: string): number {
let inSingle = false;
let inDouble = false;
for (let i = 0; i < s.length; i++) {
const c = s[i];
if (c === "'" && !inDouble) inSingle = !inSingle;
else if (c === '"' && !inSingle) inDouble = !inDouble;
else if (c === '#' && !inSingle && !inDouble) return i;
}
return -1;
}
function colorScalar(v: string): string {
const t = v.trim();
if (t === 'true' || t === 'false')
return `<span style="color:#34d399">${v}</span>`;
if (/^-?\d[\d_]*(\.[\d_]*)?([eE][+-]?\d+)?$/.test(t))
return `<span style="color:#fbbf24">${v}</span>`;
if (t.startsWith('"') || t.startsWith("'"))
return `<span style="color:#86efac">${v}</span>`;
if (t.startsWith('[') || t.startsWith('{'))
return `<span style="color:#e2e8f0">${v}</span>`;
if (/^\d{4}-\d{2}-\d{2}/.test(t))
return `<span style="color:#fb923c">${v}</span>`;
return v;
}
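`findUnquotedHash` above toggles per-character quote state so a `#` inside a TOML string value is not mistaken for a comment delimiter. The same scan, ported to Rust for illustration:

```rust
// Find the byte index of the first '#' that sits outside single or
// double quotes; None if every '#' is quoted (or there is none).
fn find_unquoted_hash(s: &str) -> Option<usize> {
    let (mut in_single, mut in_double) = (false, false);
    for (i, c) in s.char_indices() {
        match c {
            '\'' if !in_double => in_single = !in_single,
            '"' if !in_single => in_double = !in_double,
            '#' if !in_single && !in_double => return Some(i),
            _ => {}
        }
    }
    None
}

fn main() {
    assert_eq!(find_unquoted_hash("\"a # b\" # comment"), Some(8));
    assert_eq!(find_unquoted_hash("\"all # quoted\""), None);
    assert_eq!(find_unquoted_hash("plain # tail"), Some(6));
}
```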
export default function Config() {
const [config, setConfig] = useState('');
const [loading, setLoading] = useState(true);
@@ -16,6 +101,16 @@ export default function Config() {
const [error, setError] = useState<string | null>(null);
const [success, setSuccess] = useState<string | null>(null);
const textareaRef = useRef<HTMLTextAreaElement>(null);
const preRef = useRef<HTMLPreElement>(null);
const syncScroll = useCallback(() => {
if (preRef.current && textareaRef.current) {
preRef.current.scrollTop = textareaRef.current.scrollTop;
preRef.current.scrollLeft = textareaRef.current.scrollLeft;
}
}, []);
useEffect(() => {
getConfig()
.then((data) => { setConfig(typeof data === 'string' ? data : JSON.stringify(data, null, 2)); })
@@ -53,7 +148,7 @@ export default function Config() {
}
return (
<div className="p-6 space-y-6 animate-fade-in">
<div className="flex flex-col h-full p-6 gap-6 animate-fade-in overflow-hidden">
{/* Header */}
<div className="flex items-center justify-between">
<div className="flex items-center gap-2">
@@ -95,7 +190,7 @@ export default function Config() {
)}
{/* Config Editor */}
<div className="card overflow-hidden rounded-2xl">
<div className="card overflow-hidden rounded-2xl flex flex-col flex-1 min-h-0">
<div className="flex items-center justify-between px-4 py-2.5 border-b" style={{ borderColor: 'var(--pc-border)', background: 'var(--pc-accent-glow)' }}>
<span className="text-[10px] font-semibold uppercase tracking-wider" style={{ color: 'var(--pc-text-muted)' }}>
{t('config.toml_label')}
@@ -104,13 +199,34 @@ export default function Config() {
{config.split('\n').length} {t('config.lines')}
</span>
</div>
<textarea
value={config}
onChange={(e) => setConfig(e.target.value)}
spellCheck={false}
className="w-full min-h-[500px] text-sm p-4 resize-y focus:outline-none font-mono"
style={{ background: 'var(--pc-bg-base)', color: 'var(--pc-text-secondary)', tabSize: 4 }}
/>
<div className="relative flex-1 min-h-0 overflow-hidden">
<pre
ref={preRef}
aria-hidden="true"
className="absolute inset-0 text-sm p-4 font-mono overflow-auto whitespace-pre pointer-events-none m-0"
style={{ background: 'var(--pc-bg-base)', tabSize: 4 }}
dangerouslySetInnerHTML={{ __html: highlightToml(config) }}
/>
<textarea
ref={textareaRef}
value={config}
onChange={(e) => setConfig(e.target.value)}
onScroll={syncScroll}
onKeyDown={(e) => {
if (e.key === 'Tab') {
e.preventDefault();
const el = e.currentTarget;
const start = el.selectionStart;
const end = el.selectionEnd;
setConfig(config.slice(0, start) + '  ' + config.slice(end));
requestAnimationFrame(() => { el.selectionStart = el.selectionEnd = start + 2; });
}
}}
spellCheck={false}
className="absolute inset-0 w-full h-full text-sm p-4 resize-none focus:outline-none font-mono caret-white"
style={{ background: 'transparent', color: 'transparent', tabSize: 4 }}
/>
</div>
</div>
</div>
);
@@ -293,7 +293,7 @@ export default function Cron() {
}
return (
<div className="p-6 space-y-6 animate-fade-in">
<div className="flex flex-col h-full p-6 gap-6 animate-fade-in overflow-hidden">
{/* Header */}
<div className="flex items-center justify-between">
<div className="flex items-center gap-2">
@@ -415,7 +415,7 @@ export default function Cron() {
<p style={{ color: 'var(--pc-text-muted)' }}>{t('cron.empty')}</p>
</div>
) : (
<div className="card overflow-x-auto rounded-2xl">
<div className="card overflow-auto rounded-2xl flex-1 min-h-0">
<table className="table-electric">
<thead>
<tr>
@@ -236,7 +236,7 @@ export default function Dashboard() {
className="text-sm font-semibold uppercase tracking-wider"
style={{ color: "var(--pc-text-primary)" }}
>
{t("dashboard.active_channels")}
{t("dashboard.channels")}
</h2>
<button
onClick={() => setShowAllChannels((v) => !v)}
@@ -265,7 +265,7 @@ export default function Dashboard() {
: t("dashboard.filter_active")}
</button>
</div>
<div className="space-y-2">
<div className="space-y-2 overflow-y-auto max-h-48 pr-1">
{Object.entries(status.channels).length === 0 ? (
<p className="text-sm" style={{ color: "var(--pc-text-faint)" }}>
{t("dashboard.no_channels")}
@@ -24,7 +24,13 @@ function eventTypeStyle(type: string): { color: string; bg: string; border: stri
return { color: 'var(--color-status-warning)', bg: 'rgba(255, 170, 0, 0.06)', border: 'rgba(255, 170, 0, 0.2)' };
case 'tool_call':
case 'tool_result':
case 'tool_call_start':
return { color: '#a78bfa', bg: 'rgba(167, 139, 250, 0.06)', border: 'rgba(167, 139, 250, 0.2)' };
case 'llm_request':
return { color: '#38bdf8', bg: 'rgba(56, 189, 248, 0.06)', border: 'rgba(56, 189, 248, 0.2)' };
case 'agent_start':
case 'agent_end':
return { color: '#34d399', bg: 'rgba(52, 211, 153, 0.06)', border: 'rgba(52, 211, 153, 0.2)' };
case 'message':
case 'chat':
return { color: 'var(--pc-accent)', bg: 'var(--pc-accent-glow)', border: 'var(--pc-accent-dim)' };
@@ -43,6 +49,7 @@ export default function Logs() {
const [paused, setPaused] = useState(false);
const [connected, setConnected] = useState(false);
const [autoScroll, setAutoScroll] = useState(true);
+const [infoDismissed, setInfoDismissed] = useState(false);
const [typeFilters, setTypeFilters] = useState<Set<string>>(new Set());
const containerRef = useRef<HTMLDivElement>(null);
const sseRef = useRef<SSEClient | null>(null);
@@ -121,21 +128,12 @@ export default function Logs() {
const filteredEntries = typeFilters.size === 0 ? entries : entries.filter((e) => typeFilters.has(e.event.type));
return (
-<div className="flex flex-col h-[calc(100vh-3.5rem)]">
+<div className="flex flex-col h-full">
{/* Toolbar */}
<div className="flex items-center justify-between px-6 py-3 border-b animate-fade-in" style={{ borderColor: 'var(--pc-border)', background: 'var(--pc-bg-surface)' }}>
<div className="flex items-center gap-3">
<Activity className="h-5 w-5" style={{ color: 'var(--pc-accent)' }} />
<h2 className="text-sm font-semibold uppercase tracking-wider" style={{ color: 'var(--pc-text-primary)' }}>{t('logs.live_logs')}</h2>
-<div className="flex items-center gap-2 ml-2">
-<span className="status-dot" style={
-connected ? { background: 'var(--color-status-success)', boxShadow: '0 0 6px var(--color-status-success)' } : { background: 'var(--color-status-error)', boxShadow: '0 0 6px var(--color-status-error)' }
-}
-/>
-<span className="text-[10px]" style={{ color: 'var(--pc-text-faint)' }}>
-{connected ? t('logs.connected') : t('logs.disconnected')}
-</span>
-</div>
<span className="text-[10px] font-mono ml-2" style={{ color: 'var(--pc-text-faint)' }}>
{filteredEntries.length} {t('logs.events')}
</span>
@@ -195,11 +193,32 @@ export default function Logs() {
</div>
)}
+{/* Informational banner — what appears here and what does not */}
+{!infoDismissed && (
+<div className="flex items-start gap-3 px-6 py-3 border-b flex-shrink-0" style={{ borderColor: 'rgba(56, 189, 248, 0.2)', background: 'rgba(56, 189, 248, 0.05)' }}>
+<div className="flex-1 text-xs" style={{ color: 'var(--pc-text-secondary)' }}>
+<span className="font-semibold" style={{ color: '#38bdf8' }}>What appears here: </span>
+agent activity streamed over SSE: LLM requests, tool calls, agent start/end, and errors.
+{' '}<span className="font-semibold" style={{ color: 'var(--pc-text-muted)' }}>What does not: </span>
+daemon stdout and <code>RUST_LOG</code> tracing output go to the terminal or log file, not this stream.
+{' '}To see tracing logs, run the daemon with <code>RUST_LOG=info zeroclaw</code> and check your terminal.
+</div>
+<button
+onClick={() => setInfoDismissed(true)}
+className="flex-shrink-0 text-[10px] btn-icon"
+aria-label="Dismiss"
+style={{ color: 'var(--pc-text-faint)' }}
+>
+×
+</button>
+</div>
+)}
{/* Log entries */}
<div
ref={containerRef}
onScroll={handleScroll}
-className="flex-1 overflow-y-auto p-4 space-y-2"
+className="flex-1 overflow-y-auto p-4 space-y-2 min-h-0"
>
{filteredEntries.length === 0 ? (
<div className="flex flex-col items-center justify-center h-full animate-fade-in" style={{ color: 'var(--pc-text-muted)' }}>
@@ -249,6 +268,15 @@ export default function Logs() {
})
)}
</div>
+{/* Footer: connection status */}
+<div className="flex items-center justify-center gap-2 px-6 py-2 border-t flex-shrink-0" style={{ borderColor: 'var(--pc-border)', background: 'var(--pc-bg-surface)' }}>
+<span className="status-dot" style={
+connected ? { background: 'var(--color-status-success)', boxShadow: '0 0 6px var(--color-status-success)' } : { background: 'var(--color-status-error)', boxShadow: '0 0 6px var(--color-status-error)' }
+} />
+<span className="text-[10px]" style={{ color: 'var(--pc-text-faint)' }}>
+{connected ? t('logs.connected') : t('logs.disconnected')}
+</span>
+</div>
</div>
);
}
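The type-filter expression shown near the top of the component (an empty filter set means every entry passes) can be exercised in isolation. A minimal sketch — `LogEntry` here is a pared-down stand-in for the page's real entry type, not the actual interface:

```typescript
// Pared-down stand-in for the page's log entry shape.
interface LogEntry {
  event: { type: string };
}

// Mirrors `filteredEntries`: an empty filter set means "show everything";
// otherwise keep only entries whose event type is in the set.
function filterEntries(entries: LogEntry[], typeFilters: Set<string>): LogEntry[] {
  return typeFilters.size === 0
    ? entries
    : entries.filter((e) => typeFilters.has(e.event.type));
}

const sample: LogEntry[] = [
  { event: { type: 'llm_request' } },
  { event: { type: 'tool_call' } },
  { event: { type: 'error' } },
];

console.log(filterEntries(sample, new Set()).length);               // 3
console.log(filterEntries(sample, new Set(['tool_call'])).length);  // 1
```

Treating "no filters selected" as "all types" keeps the toolbar chips purely additive: toggling a chip on narrows the view, and clearing the last chip restores the full stream.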
@@ -6,6 +6,7 @@ import {
ChevronRight,
Terminal,
Package,
+ChevronsUpDown,
} from 'lucide-react';
import type { ToolSpec, CliTool } from '@/types/api';
import { getTools, getCliTools } from '@/lib/api';
@@ -16,6 +17,8 @@ export default function Tools() {
const [cliTools, setCliTools] = useState<CliTool[]>([]);
const [search, setSearch] = useState('');
const [expandedTool, setExpandedTool] = useState<string | null>(null);
+const [agentSectionOpen, setAgentSectionOpen] = useState(true);
+const [cliSectionOpen, setCliSectionOpen] = useState(true);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
@@ -70,104 +73,128 @@ export default function Tools() {
{/* Agent Tools Grid */}
<div>
-<div className="flex items-center gap-2 mb-4">
+<button
+onClick={() => setAgentSectionOpen((v) => !v)}
+className="flex items-center gap-2 mb-4 w-full text-left group"
+style={{ background: 'transparent', border: 'none', cursor: 'pointer', padding: 0 }}
+aria-expanded={agentSectionOpen}
+aria-controls="agent-tools-section"
+>
<Wrench className="h-5 w-5" style={{ color: 'var(--pc-accent)' }} />
-<h2 className="text-sm font-semibold uppercase tracking-wider" style={{ color: 'var(--pc-text-primary)' }}>
+<span className="text-sm font-semibold uppercase tracking-wider flex-1" role="heading" aria-level={2} style={{ color: 'var(--pc-text-primary)' }}>
{t('tools.agent_tools')} ({filtered.length})
-</h2>
-</div>
+</span>
+<ChevronDown
+className="h-4 w-4 opacity-40 group-hover:opacity-100"
+style={{ color: 'var(--pc-text-muted)', transform: agentSectionOpen ? 'rotate(0deg)' : 'rotate(-90deg)', transition: 'transform 0.2s ease, opacity 0.2s ease' }}
+/>
+</button>
-{filtered.length === 0 ? (
-<p className="text-sm" style={{ color: 'var(--pc-text-muted)' }}>{t('tools.empty')}</p>
-) : (
-<div className="grid grid-cols-1 md:grid-cols-2 xl:grid-cols-3 gap-4 stagger-children">
-{filtered.map((tool) => {
-const isExpanded = expandedTool === tool.name;
-return (
-<div
-key={tool.name}
-className="card overflow-hidden animate-slide-in-up"
->
-<button
-onClick={() => setExpandedTool(isExpanded ? null : tool.name)}
-className="w-full text-left p-4 transition-all"
-style={{ background: 'transparent' }}
-onMouseEnter={(e) => { e.currentTarget.style.background = 'var(--pc-hover)'; }}
-onMouseLeave={(e) => { e.currentTarget.style.background = 'transparent'; }}
+<div id="agent-tools-section">
+{agentSectionOpen && (filtered.length === 0 ? (
+<p className="text-sm" style={{ color: 'var(--pc-text-muted)' }}>{t('tools.empty')}</p>
+) : (
+<div className="grid grid-cols-1 md:grid-cols-2 xl:grid-cols-3 gap-4 stagger-children">
+{filtered.map((tool) => {
+const isExpanded = expandedTool === tool.name;
+return (
+<div
+key={tool.name}
+className="card overflow-hidden animate-slide-in-up"
+>
-<div className="flex items-start justify-between gap-2">
-<div className="flex items-center gap-2 min-w-0">
-<Package className="h-4 w-4 flex-shrink-0" style={{ color: 'var(--pc-accent)' }} />
-<h3 className="text-sm font-semibold truncate" style={{ color: 'var(--pc-text-primary)' }}>{tool.name}</h3>
+<button
+onClick={() => setExpandedTool(isExpanded ? null : tool.name)}
+className="w-full text-left p-4 transition-all"
+style={{ background: 'transparent' }}
+onMouseEnter={(e) => { e.currentTarget.style.background = 'var(--pc-hover)'; }}
+onMouseLeave={(e) => { e.currentTarget.style.background = 'transparent'; }}
>
+<div className="flex items-start justify-between gap-2">
+<div className="flex items-center gap-2 min-w-0">
+<Package className="h-4 w-4 flex-shrink-0" style={{ color: 'var(--pc-accent)' }} />
+<h3 className="text-sm font-semibold truncate" style={{ color: 'var(--pc-text-primary)' }}>{tool.name}</h3>
</div>
+{isExpanded
+? <ChevronDown className="h-4 w-4 flex-shrink-0" style={{ color: 'var(--pc-accent)' }} />
+: <ChevronRight className="h-4 w-4 flex-shrink-0" style={{ color: 'var(--pc-text-faint)' }} />
+}
+</div>
-{isExpanded
-? <ChevronDown className="h-4 w-4 flex-shrink-0" style={{ color: 'var(--pc-accent)' }} />
-: <ChevronRight className="h-4 w-4 flex-shrink-0" style={{ color: 'var(--pc-text-faint)' }} />
-}
-</div>
-<p className="text-sm mt-2 line-clamp-2" style={{ color: 'var(--pc-text-muted)' }}>
-{tool.description}
-</p>
-</button>
-{isExpanded && tool.parameters && (
-<div className="border-t p-4 animate-fade-in" style={{ borderColor: 'var(--pc-border)' }}>
-<p className="text-[10px] font-semibold uppercase tracking-wider mb-2" style={{ color: 'var(--pc-text-muted)' }}>
-{t('tools.parameter_schema')}
+<p className="text-sm mt-2 line-clamp-2" style={{ color: 'var(--pc-text-muted)' }}>
+{tool.description}
+</p>
-<pre className="text-xs rounded-xl p-3 overflow-x-auto max-h-64 overflow-y-auto font-mono" style={{ background: 'var(--pc-bg-base)', color: 'var(--pc-text-secondary)' }}>
-{JSON.stringify(tool.parameters, null, 2)}
-</pre>
-</div>
-)}
-</div>
-);
-})}
-</div>
-)}
+</button>
+{isExpanded && tool.parameters && (
+<div className="border-t p-4 animate-fade-in" style={{ borderColor: 'var(--pc-border)' }}>
+<p className="text-[10px] font-semibold uppercase tracking-wider mb-2" style={{ color: 'var(--pc-text-muted)' }}>
+{t('tools.parameter_schema')}
+</p>
+<pre className="text-xs rounded-xl p-3 overflow-x-auto max-h-64 overflow-y-auto font-mono" style={{ background: 'var(--pc-bg-base)', color: 'var(--pc-text-secondary)' }}>
+{JSON.stringify(tool.parameters, null, 2)}
+</pre>
+</div>
+)}
+</div>
+);
+})}
+</div>
+))}
+</div>
</div>
{/* CLI Tools Section */}
{filteredCli.length > 0 && (
<div className="animate-slide-in-up" style={{ animationDelay: '200ms' }}>
-<div className="flex items-center gap-2 mb-4">
+<button
+onClick={() => setCliSectionOpen((v) => !v)}
+className="flex items-center gap-2 mb-4 w-full text-left group"
+style={{ background: 'transparent', border: 'none', cursor: 'pointer', padding: 0 }}
+aria-expanded={cliSectionOpen}
+aria-controls="cli-tools-section"
+>
<Terminal className="h-5 w-5" style={{ color: 'var(--color-status-success)' }} />
-<h2 className="text-sm font-semibold uppercase tracking-wider" style={{ color: 'var(--pc-text-primary)' }}>
+<span className="text-sm font-semibold uppercase tracking-wider flex-1" role="heading" aria-level={2} style={{ color: 'var(--pc-text-primary)' }}>
{t('tools.cli_tools')} ({filteredCli.length})
-</h2>
-</div>
+</span>
+<ChevronDown
+className="h-4 w-4 opacity-40 group-hover:opacity-100"
+style={{ color: 'var(--pc-text-muted)', transform: cliSectionOpen ? 'rotate(0deg)' : 'rotate(-90deg)', transition: 'transform 0.2s ease, opacity 0.2s ease' }}
+/>
+</button>
-<div className="card overflow-hidden rounded-2xl">
-<table className="table-electric">
-<thead>
-<tr>
-<th>{t('tools.name')}</th>
-<th>{t('tools.path')}</th>
-<th>{t('tools.version')}</th>
-<th>{t('tools.category')}</th>
-</tr>
-</thead>
-<tbody>
-{filteredCli.map((tool) => (
-<tr key={tool.name}>
-<td className="font-medium text-sm" style={{ color: 'var(--pc-text-primary)' }}>
-{tool.name}
-</td>
-<td className="font-mono text-xs truncate max-w-[200px]" style={{ color: 'var(--pc-text-muted)' }}>
-{tool.path}
-</td>
-<td style={{ color: 'var(--pc-text-muted)' }}>
-{tool.version ?? '-'}
-</td>
-<td>
-<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-[10px] font-semibold capitalize border" style={{ borderColor: 'var(--pc-border)', color: 'var(--pc-text-secondary)', background: 'var(--pc-accent-glow)' }}>
-{tool.category}
-</span>
-</td>
+<div id="cli-tools-section">
+{cliSectionOpen && <div className="card overflow-hidden rounded-2xl">
+<table className="table-electric">
+<thead>
+<tr>
+<th>{t('tools.name')}</th>
+<th>{t('tools.path')}</th>
+<th>{t('tools.version')}</th>
+<th>{t('tools.category')}</th>
+</tr>
-))}
-</tbody>
-</table>
+</thead>
+<tbody>
+{filteredCli.map((tool) => (
+<tr key={tool.name}>
+<td className="font-medium text-sm" style={{ color: 'var(--pc-text-primary)' }}>
+{tool.name}
+</td>
+<td className="font-mono text-xs truncate max-w-[200px]" style={{ color: 'var(--pc-text-muted)' }}>
+{tool.path}
+</td>
+<td style={{ color: 'var(--pc-text-muted)' }}>
+{tool.version ?? '-'}
+</td>
+<td>
+<span className="inline-flex items-center px-2.5 py-0.5 rounded-full text-[10px] font-semibold capitalize border" style={{ borderColor: 'var(--pc-border)', color: 'var(--pc-text-secondary)', background: 'var(--pc-accent-glow)' }}>
+{tool.category}
+</span>
+</td>
+</tr>
+))}
+</tbody>
+</table>
+</div>}
</div>
</div>
)}
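Both section headings in the Tools diff follow the same collapsible pattern: one boolean per section, toggled by a heading button carrying `aria-expanded`, with the body rendered conditionally. Stripped of JSX, the state logic reduces to a sketch like this — the `agent`/`cli` keys are illustrative stand-ins, not the component's actual state variable names:

```typescript
// One open/closed flag per section; both default open, matching the page.
type SectionState = Record<string, boolean>;

// Flip a single section's flag, leaving the others untouched.
function toggleSection(state: SectionState, key: string): SectionState {
  return { ...state, [key]: !state[key] };
}

let sections: SectionState = { agent: true, cli: true };

sections = toggleSection(sections, 'agent');
console.log(sections.agent); // false
console.log(sections.cli);   // true

sections = toggleSection(sections, 'agent');
console.log(sections.agent); // true
```

Returning a fresh object rather than mutating in place is what lets React-style state setters detect the change; the component version gets the same effect from `setAgentSectionOpen((v) => !v)` per section.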