* feat(config): add google workspace operation allowlists
* docs(superpowers): link google workspace operation inventory sources
* docs(superpowers): verify starter operation examples
* fix(google_workspace): remove duplicate credential/audit blocks, fix trim in allowlist check, add duplicate-methods test
- Remove the duplicated credentials_path, default_account, and audit_log
blocks that were copy-pasted into execute() — mostly idempotent but
misleading, and they double-appended --account args on every call.
- Trim stored service/resource/method values in is_operation_allowed() to
match the trim applied during Config::validate(), preventing a mismatch
where a config entry with surrounding whitespace would pass validation but
never match at runtime.
- Add google_workspace_allowed_operations_reject_duplicate_methods_within_entry
test to cover the duplicate-method validation path that was implemented but
untested.
* fix(google_workspace): close sub_resource bypass, trim allowed_services at runtime, mark spec implemented
- HIGH: extract and validate sub_resource before the allowlist check;
is_operation_allowed() now accepts Option<&str> for sub_resource and
returns false (fail-closed) when allowed_operations is non-empty and
a sub_resource is present — prevents nested gws calls such as
`drive/files/permissions/list` from slipping past a 3-segment policy
- MEDIUM: runtime allowed_services check now uses s.trim() == service,
matching the trim() applied during config validation
- LOW: spec status updated to Implemented; stale "does not currently
support method-level allowlists" line removed
- Added test: operation_allowlist_rejects_sub_resource_when_operations_configured
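The fail-closed check above can be sketched in std-only Rust (struct and function names are simplified stand-ins for the real config types, not the shipped code):

```rust
// Hypothetical sketch of the fail-closed allowlist check; field names
// follow this commit's wording, not necessarily the real schema.
struct AllowedOperation {
    service: String,
    resource: String,
    method: String,
}

fn is_operation_allowed(
    allowed: &[AllowedOperation],
    service: &str,
    resource: &str,
    sub_resource: Option<&str>,
    method: &str,
) -> bool {
    // Empty allowlist means no operation-level policy is configured.
    if allowed.is_empty() {
        return true;
    }
    // Fail closed: a 3-segment policy cannot express 4-segment commands,
    // so any call carrying a sub_resource is denied.
    if sub_resource.is_some() {
        return false;
    }
    allowed.iter().any(|op| {
        op.service.trim() == service
            && op.resource.trim() == resource
            && op.method.trim() == method
    })
}

fn main() {
    let allow = vec![AllowedOperation {
        service: "drive".into(),
        resource: "files".into(),
        method: "list".into(),
    }];
    assert!(is_operation_allowed(&allow, "drive", "files", None, "list"));
    // `drive/files/permissions/list` no longer slips past the 3-segment policy.
    assert!(!is_operation_allowed(&allow, "drive", "files", Some("permissions"), "list"));
}
```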
* docs(google_workspace): document sub_resource limitation and add config-reference entries
Spec updates (superpowers/specs):
- Semantics section: note that sub_resource calls are denied fail-closed when
allowed_operations is configured
- Mental model: show both 3-segment and 4-segment gws command shapes; explain
that 4-segment commands are unsupported with allowed_operations in this version
- Runtime enforcement: correct the validation order to match the implementation
(sub_resource extracted before allowlist check, budget charged last)
- New section: Sub-Resource Limitation — documents impact, operator workaround,
and confirms the deny is intentional for this slice
- Follow-On Work: add sub_resource config model extension as item 1
Config reference updates (all three locales):
- Add [google_workspace] section with top-level keys, [[allowed_operations]]
sub-table, sub-resource limitation note, and TOML example
* fix(docs): add classroom and events to allowed_services list in all config-reference locales
* feat(google_workspace): extend allowed_operations to support sub_resource for 4-segment gws commands
All Gmail operations use gws gmail users <sub_resource> <method>, not the flat
3-segment shape. Without sub_resource support in allowed_operations, Gmail could
not be scoped at all, making the email-assistant use case impossible.
Config model:
- Add optional sub_resource field to GoogleWorkspaceAllowedOperation
- An entry without sub_resource matches 3-segment calls (Drive, Calendar, etc.)
- An entry with sub_resource matches only calls with that exact sub_resource value
- Duplicate detection updated to (service, resource, sub_resource) key
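The resulting config shape can be sketched in TOML (table and key names follow this commit's wording; the shipped config-reference may differ):

```toml
[google_workspace]
enabled = true

# 3-segment entry: matches `gws drive files list` (no sub_resource)
[[google_workspace.allowed_operations]]
service = "drive"
resource = "files"
method = "list"

# 4-segment entry: matches only `gws gmail users messages list`
[[google_workspace.allowed_operations]]
service = "gmail"
resource = "users"
sub_resource = "messages"
method = "list"
```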
Runtime:
- Remove blanket sub_resource deny; is_operation_allowed now matches on all four
dimensions including the optional sub_resource
Tests:
- Add operation_allowlist_matches_gmail_sub_resource_shape
- Add operation_allowlist_matches_drive_3_segment_shape
- Add rejects_operation_with_unlisted_sub_resource
- Add google_workspace_allowed_operations_allow_same_resource_different_sub_resource
- Add google_workspace_allowed_operations_reject_invalid_sub_resource_characters
- Add google_workspace_allowed_operations_deserialize_without_sub_resource
- Update all existing tests to use correct gws command shapes
Docs:
- Spec: correct Gmail examples throughout; remove Sub-Resource Limitation section;
update data model, validation rules, example use case, and follow-on work
- Config-reference (en, vi, zh-CN): add sub_resource field to allowed_operations
table; update Gmail examples to correct 4-segment shapes
Platform:
- email-assistant SKILL.md: update allowed_operations paths to gmail/users/* shape
* fix(google_workspace): add classroom and events to service parameter schema description
* fix(google_workspace): cross-validate allowed_operations service against allowed_services
When allowed_services is explicitly configured, each allowed_operations entry's
service must appear in that list. An entry that can never match at runtime is a
misconfigured policy: it looks valid but silently produces a narrower scope than
the operator intended. Validation now rejects it with a clear error message.
Scope: only applies when allowed_services is non-empty. When it is empty, the tool
uses a built-in default list defined in the tool layer; the validator cannot
enumerate that list without duplicating the constant, so the cross-check is skipped.
Also:
- Update allowed_operations field doc-comment from 3-part (service, resource, method)
to 4-part (service, resource, sub_resource, method) model
- Soften Gmail sub_resource "required" language in config-reference (en, vi, zh-CN)
from a validation requirement to a runtime matching requirement — the validator
does not and should not hardcode API shape knowledge for individual services
- Add tests: rejects operation service not in allowed_services; skips cross-check
when allowed_services is empty
* fix(google_workspace): cross-validate allowed_operations.service against effective service set
When allowed_services is empty the validator was silently skipping the
service cross-check, allowing impossible configs like an unlisted service
in allowed_operations to pass validation and only fail at runtime.
Move DEFAULT_GWS_SERVICES from the tool layer (google_workspace.rs) into
schema.rs so the validator can use it unconditionally. When allowed_services
is explicitly set, validate against that set; when empty, fall back to
DEFAULT_GWS_SERVICES. Remove the now-incorrect "skips cross-check when empty"
test and add two replacement tests: one confirming a valid default service
passes, one confirming an unknown service is rejected even with empty
allowed_services.
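The effective-set fallback can be sketched as follows (the default service list here is illustrative, not the real DEFAULT_GWS_SERVICES contents):

```rust
// Sketch of the "effective service set" fallback: validate against the
// explicit allowed_services when set, else against built-in defaults.
const DEFAULT_GWS_SERVICES: &[&str] = &["drive", "gmail", "calendar"];

fn effective_services(allowed_services: &[String]) -> Vec<String> {
    if allowed_services.is_empty() {
        // Empty config falls back to the built-in defaults instead of
        // silently skipping the cross-check.
        DEFAULT_GWS_SERVICES.iter().map(|s| (*s).to_string()).collect()
    } else {
        allowed_services.iter().map(|s| s.trim().to_string()).collect()
    }
}

fn validate_operation_service(
    allowed_services: &[String],
    op_service: &str,
) -> Result<(), String> {
    if effective_services(allowed_services).iter().any(|s| s == op_service) {
        Ok(())
    } else {
        Err(format!(
            "allowed_operations service '{op_service}' is not in the effective service set"
        ))
    }
}

fn main() {
    // A valid default service passes even with empty allowed_services...
    assert!(validate_operation_service(&[], "drive").is_ok());
    // ...while an unknown service is rejected instead of failing at runtime.
    assert!(validate_operation_service(&[], "unknown").is_err());
}
```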
* fix(google_workspace): update test assertion for new error message wording
* docs(google_workspace): fix stale 3-segment gmail example in TDD plan
* fix(google_workspace): address adversarial review round 4 findings
- Error message for denied operations now includes sub_resource when
present, so gmail/users/messages/send and gmail/users/drafts/send
produce distinct, debuggable errors.
- Audit log now records sub_resource, completing the trail for 4-segment
Gmail operations.
- Normalize (trim) allowed_services and allowed_operations fields at
construction time in new(). Runtime comparisons now use plain equality
instead of .trim() on every call, removing the latent defect where a
future code path could forget to trim and silently fail to match.
- Unify runtime character validation with schema validation: sub_resource
and service/resource/method checks now both require lowercase alphanumeric
plus underscore and hyphen, matching the validator's character set.
- Add positional_cmd_args() test helper and tests verifying 3-segment
(Drive) and 4-segment (Gmail) argument ordering.
- Add test confirming page_limit without page_all passes validation.
- Add test confirming whitespace in config values is normalized at
construction, not deferred to comparison time.
- Fix spec Runtime Enforcement section to reflect actual code order.
* fix(google_workspace): wire production helpers to close test coverage gaps
- Remove #[cfg(test)] from positional_cmd_args; execute() now calls the
same function the arg-ordering tests exercise, so a drift in the real
command-building path is caught by the existing tests.
- Extract build_pagination_args(page_all, page_limit) as a production
method used by execute(). Replace the brittle page_limit_without_page_all
test (which relied on environment-specific execution failure wording)
with four direct assertions on build_pagination_args covering all
page_all/page_limit combinations.
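A minimal sketch of the extracted helper and its four direct assertions (the `--page-all` / `--page-limit` flag strings are assumptions about the gws CLI, not confirmed by this log):

```rust
// Hypothetical build_pagination_args: pure function over the two inputs,
// so tests can assert on output directly instead of execution wording.
fn build_pagination_args(page_all: bool, page_limit: Option<u32>) -> Vec<String> {
    let mut args = Vec::new();
    if page_all {
        args.push("--page-all".to_string());
    }
    if let Some(limit) = page_limit {
        args.push("--page-limit".to_string());
        args.push(limit.to_string());
    }
    args
}

fn main() {
    // All four page_all / page_limit combinations.
    assert!(build_pagination_args(false, None).is_empty());
    assert_eq!(build_pagination_args(true, None), vec!["--page-all"]);
    assert_eq!(build_pagination_args(false, Some(50)), vec!["--page-limit", "50"]);
    assert_eq!(
        build_pagination_args(true, Some(50)),
        vec!["--page-all", "--page-limit", "50"]
    );
}
```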
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(tools): add text browser tool for headless environments (#3879)
* fix(tools): remove redundant match arm in text_browser clippy lint
* ci: trigger fresh workflow run
Add tool descriptions alongside names in the deferred tools section so
the LLM can better identify which tool to activate via tool_search.
Original author: @mark-linyb
* feat(delegate): make delegate timeout configurable via config.toml
Add configurable timeout options for delegate tool:
- timeout_secs: for non-agentic sub-agent calls (default: 120s)
- agentic_timeout_secs: for agentic sub-agent runs (default: 300s)
Previously these were hardcoded constants (DELEGATE_TIMEOUT_SECS
and DELEGATE_AGENTIC_TIMEOUT_SECS). Users can now customize them
in config.toml under [[delegate.agents]] section.
Fixes #3898
* feat(config): make delegate tool timeouts configurable via config.toml
This change makes the hardcoded 120s/300s delegate tool timeouts
configurable through the config file:
- Add [delegate] section to Config with timeout_secs and agentic_timeout_secs
- Add DelegateToolConfig struct for global default timeout values
- Add DEFAULT_DELEGATE_TIMEOUT_SECS (120) and DEFAULT_DELEGATE_AGENTIC_TIMEOUT_SECS (300) constants
- Remove hardcoded constants from delegate.rs
- Update tests to use constant values instead of magic numbers
- Update examples/config.example.toml with documentation
Closes #3898
* fix: keep delegate timeout fields as Option<u64> with global fallback
- Change DelegateAgentConfig.timeout_secs and agentic_timeout_secs from
u64 to Option<u64> so per-agent overrides are truly optional
- Implement manual Default for DelegateToolConfig with correct values
(120s and 300s) instead of derive(Default)
- Add DelegateToolConfig to DelegateTool struct and wire through
constructors so agent timeouts fall back to global [delegate] config
- Add validation for delegate timeout values in Config::validate()
- Fix example config to use [agents.name] table syntax matching the
HashMap<String, DelegateAgentConfig> schema
- Add missing timeout fields to all DelegateAgentConfig struct literals
across codebase (doctor, swarm, model_routing_config, tools/mod)
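The global-default-plus-optional-override shape can be sketched in TOML (the agent name `researcher` is illustrative):

```toml
# Global delegate defaults; both keys validated to be 1..=3600 seconds.
[delegate]
timeout_secs = 120
agentic_timeout_secs = 300

# Per-agent override table matching HashMap<String, DelegateAgentConfig>.
[delegate.agents.researcher]
timeout_secs = 60
# agentic_timeout_secs omitted -> Option is None, falls back to the
# global 300s default above.
```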
* chore: trigger CI
* chore: retrigger CI
* fix: cargo fmt line wrapping in config/mod.rs
* fix: import timeout constants in delegate tests
* style: cargo fmt
---------
Co-authored-by: vincent067 <vincent067@outlook.com>
* feat: add Jira tool with get_ticket, search_tickets, and comment_ticket
Implements a new `jira` tool following the existing zeroclaw tool
conventions (Tool trait, SecurityPolicy, config-gated registration).
- get_ticket: configurable detail level (basic/basic_search/full/changelog)
with response shaping; always in the default allowed_actions list
- search_tickets: JQL-based search with cursor pagination (nextPageToken);
always returns basic_search shape; gated by allowed_actions
- comment_ticket: posts ADF comments with inline markdown-like syntax —
@email mentions resolved to Jira accountId, **bold**, bullet lists,
newlines; gated by allowed_actions and SecurityPolicy Act operation
Config: [jira] section with base_url, email, api_token (encrypted at
rest, falls back to JIRA_API_TOKEN env var), allowed_actions (default:
["get_ticket"]), and timeout_secs. Validated on load.
Tool description in tool_descriptions/en.toml documents all three
actions and the full comment syntax for the AI system prompt.
* fix: address jira tool code review findings
High priority:
- Validate issue_key against ^[A-Z][A-Z0-9]+-\d+$ before URL interpolation
to prevent path traversal in get_ticket and comment_ticket
Medium priority:
- Add email guard in tool registration (mod.rs) to skip with a warning
instead of registering a broken tool when jira.email is empty
- Shape comment_ticket response to return only id, author, created —
avoids exposing internal Jira metadata to the AI
- Replace O(n²) comment matching in shape_basic with a HashMap lookup
keyed by comment ID for O(1) access
- Add api_token validation in Config::validate() checking both the
config field and JIRA_API_TOKEN env var when jira.enabled = true
Low priority:
- Make shape_basic_search private (was accidentally pub)
- Extend clean_email to strip the leading punctuation characters `(` and `[`
so that @(john@co.com) resolves correctly; fix suffix computation via
pointer arithmetic to handle the shifted offset
- Clarify tool_descriptions/en.toml: @prefix is required for mentions,
bare emails without @ are treated as plain text
- Handle unmatched ** in parse_inline: emit as literal text instead of
silently producing a bold node with no closing marker
* fix(jira): allow lowercase project keys in issue_key validation
Relax validate_issue_key to accept both PROJ-123 and proj-123, since
some Jira instances use lowercase custom project keys. Path traversal
protection is preserved via alphanumeric + digit-number requirement.
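The relaxed check can be sketched std-only (the real code uses a regex equivalent to `^[A-Za-z][A-Za-z0-9]+-\d+$`; this hand-rolled version mirrors that shape):

```rust
// Sketch of validate_issue_key after the lowercase relaxation: project key
// of at least two chars starting with a letter, hyphen, then digits only.
fn validate_issue_key(key: &str) -> bool {
    let Some((project, number)) = key.split_once('-') else {
        return false;
    };
    project.len() >= 2
        && project.chars().next().is_some_and(|c| c.is_ascii_alphabetic())
        && project.chars().all(|c| c.is_ascii_alphanumeric())
        && !number.is_empty()
        && number.chars().all(|c| c.is_ascii_digit())
}

fn main() {
    assert!(validate_issue_key("PROJ-123"));
    assert!(validate_issue_key("proj-123")); // lowercase project keys accepted
    assert!(!validate_issue_key("../secret")); // path traversal input rejected
    assert!(!validate_issue_key("PROJ-12a")); // non-numeric issue number rejected
}
```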
* feat(tools): add tool honesty instructions to system prompt
Prevent AI from fabricating tool results by injecting a "CRITICAL:
Tool Honesty" section into both channel and CLI/agent system prompts.
Rules: never fabricate or guess tool results, report errors as-is,
and ask the user when unsure if a tool call succeeded.
* style: sort JiraConfig import alphabetically in config/mod.rs
* style(jira): fix strict clippy lints in jira_tool
- Derive Default for LevelOfDetails instead of manual impl
- Use char arrays in trim_start_matches/trim_end_matches
- Allow cast_possible_truncation on search_tickets (usize->u32 bounded by max_results)
- Remove needless borrow on &email
* fix(ci): adapt to upstream autonomy_level additions in channels/mod.rs
- Add missing autonomy_level argument to build_system_prompt_with_mode call in test
- Add missing autonomy_level field in ChannelRuntimeContext test initializer
- Allow large_futures in load_or_init test (Config struct growth from JiraConfig)
* fix(ci): resolve duplicate and missing autonomy_level in test initializers
* fix(ci): use TelegramRecordingChannel in telegram-specific test
The test process_channel_message_executes_tool_calls_instead_of_sending_raw_json
sent messages on channel "telegram" but registered RecordingChannel (name:
"test-channel"), causing the channel lookup to return None and no messages to
be sent.
* fix(jira): prevent panics on short dates, fix dedup bug, normalize base_url
- Add date_prefix() helper to safely slice date strings instead of
panicking on empty or short strings from the Jira API.
- Replace Vec::dedup() with HashSet-based retain in extract_emails()
to correctly deduplicate non-adjacent duplicates.
- Strip trailing slashes from base_url during construction to prevent
double-slash URLs.
- Add tests for date_prefix and non-adjacent email dedup.
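The safe-slice helper can be sketched as follows (the 10-char `YYYY-MM-DD` prefix length is an assumption based on Jira's date format):

```rust
// Sketch of date_prefix: take at most the first 10 chars ("YYYY-MM-DD")
// without panicking on empty or short strings from the Jira API.
fn date_prefix(date: &str) -> &str {
    // char_indices avoids slicing in the middle of a multi-byte char.
    match date.char_indices().nth(10) {
        Some((idx, _)) => &date[..idx],
        None => date,
    }
}

fn main() {
    assert_eq!(date_prefix("2024-06-01T12:34:56.000+0000"), "2024-06-01");
    assert_eq!(date_prefix(""), ""); // empty API field: no panic
    assert_eq!(date_prefix("2024"), "2024"); // short string returned as-is
}
```

Contrast with `&date[..10]`, which panics whenever the input is shorter than ten bytes or splits a multi-byte character.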
---------
Co-authored-by: Anatolii <anatolii@Anatoliis-MacBook.local>
Co-authored-by: Anatolii <anatolii.fesiuk@gmail.com>
Persist allowed_tools in cron_jobs table, threading it through CLI add/update and cron_add/cron_update tool APIs. Add regression coverage for store, tool, and CLI roundtrip paths.
Fixups over original PR #3929: add allowed_tools to all_overdue_jobs SELECT (merge gap), resolve merge conflicts.
Closes #3920
Supersedes #3929
* fix: add sandbox field to ShellTool struct
Add `sandbox: Arc<dyn Sandbox>` field to `ShellTool` and a
`new_with_sandbox()` constructor so callers can inject the configured
sandbox backend. The existing `new()` constructor defaults to
`NoopSandbox` for backward compatibility.
Ref: #3983
* fix: apply sandbox wrapping in ShellTool::execute()
Call `self.sandbox.wrap_command()` on the underlying std::process::Command
(via `as_std_mut()`) after building the shell command and before clearing
the environment. This ensures every shell command passes through the
configured sandbox backend before execution.
Ref: #3983
* fix: wire up sandbox creation at ShellTool callsites
In `all_tools_with_runtime()`, create a sandbox from
`root_config.security` via `create_sandbox()` and pass it to
`ShellTool::new_with_sandbox()`. The `default_tools_with_runtime()`
path retains `ShellTool::new()` which defaults to `NoopSandbox`.
Ref: #3983
* test: add sandbox integration tests for ShellTool
Verify that ShellTool can be constructed with a sandbox via
`new_with_sandbox()`, that NoopSandbox leaves commands unmodified,
and that command execution works end-to-end with a sandbox attached.
Ref: #3983
Replace workspace_dir.join(path) with resolve_tool_path(path) in
file_write, file_edit, and pdf_read tools to correctly handle absolute
paths within the workspace directory, preventing path doubling.
Closes #3774
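The intent of the replacement helper can be sketched like this (a hypothetical reconstruction of `resolve_tool_path` — the real helper's rules may differ):

```rust
use std::path::{Path, PathBuf};

// Hypothetical resolve_tool_path: an absolute path already under the
// workspace is used as-is; anything else is joined onto workspace_dir.
// This avoids re-prefixing ("path doubling") absolute in-workspace paths.
fn resolve_tool_path(workspace_dir: &Path, path: &str) -> PathBuf {
    let p = Path::new(path);
    if p.is_absolute() && p.starts_with(workspace_dir) {
        p.to_path_buf()
    } else {
        workspace_dir.join(p)
    }
}

fn main() {
    // Unix-style paths assumed for illustration.
    let ws = Path::new("/workspace");
    assert_eq!(
        resolve_tool_path(ws, "notes.txt"),
        PathBuf::from("/workspace/notes.txt")
    );
    // Absolute path inside the workspace is not prefixed a second time.
    assert_eq!(
        resolve_tool_path(ws, "/workspace/notes.txt"),
        PathBuf::from("/workspace/notes.txt")
    );
}
```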
Add `timeout_secs` and `agentic_timeout_secs` fields to
`DelegateAgentConfig` so users can tune per-agent timeouts instead
of relying on the hardcoded 120s / 300s defaults.
Validation rejects values of 0 or above 3600s, matching the pattern
used by MCP timeout validation.
Closes #3898
When LLMs pass the schedule parameter as a JSON string instead of a JSON
object, serde fails with "invalid type: string, expected internally
tagged enum Schedule". Add a deserialize_maybe_stringified helper that
detects stringified JSON values and parses the inner string before
deserializing, providing backward compatibility for both object and
string representations.
Fixes #3860
The deferred MCP tools section in the system prompt only listed tool
names inside <available-deferred-tools> tags without any instruction
telling the LLM to call tool_search to activate them. In daemon and
Telegram mode, where conversations are shorter and less guided, the
LLM never discovered it should call tool_search, so deferred tools
were effectively unavailable.
Add a "## Deferred Tools" heading with explicit instructions that
the LLM MUST call tool_search before using any listed tool. This
ensures the LLM knows to activate deferred tools in all modes
(CLI, daemon, Telegram) consistently.
Also add tests covering:
- Instruction presence in the deferred section
- Multiple-server deferred tool search
- Cross-server keyword search ranking
- Activation persistence across multiple tool_search calls
- Idempotent re-activation
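The shape of the new prompt section can be sketched as a string builder (the exact wording and the `<available-deferred-tools>` tag name follow this log; the shipped prompt text may differ):

```rust
// Illustrative builder for the "## Deferred Tools" prompt section with an
// explicit tool_search activation instruction.
fn deferred_tools_section(tool_names: &[&str]) -> String {
    if tool_names.is_empty() {
        return String::new();
    }
    let mut s = String::from("## Deferred Tools\n");
    s.push_str("The tools below are NOT yet active. You MUST call ");
    s.push_str("tool_search to activate a tool before using it.\n");
    s.push_str("<available-deferred-tools>\n");
    for name in tool_names {
        s.push_str(name);
        s.push('\n');
    }
    s.push_str("</available-deferred-tools>\n");
    s
}

fn main() {
    let section = deferred_tools_section(&["jira_search", "gws_drive"]);
    assert!(section.contains("## Deferred Tools"));
    assert!(section.contains("tool_search"));
    assert!(deferred_tools_section(&[]).is_empty());
}
```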
Add support for switching AI models at runtime during a conversation.
The model_switch tool allows users to:
- Get current model state
- List available providers
- List models for a provider
- Switch to a different model
The switch takes effect immediately for the current conversation by
recreating the provider with the new model after tool execution.
Risk: Medium - internal state changes and provider recreation
- Wire WASM plugin tools into all_tools_with_runtime() behind
cfg(feature = "plugins-wasm"), discovering and registering tool-capable
plugins from the configured plugins directory at startup.
- Add /api/plugins gateway endpoint (cfg-gated) for listing plugin status.
- Add mod plugins declaration to main.rs binary crate so crate::plugins
resolves when the feature is enabled.
- Add unit tests for PluginHost: empty dir, manifest discovery, capability
filtering, lookup, and removal.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat(runtime): add configurable reasoning effort
* fix(test): add missing reasoning_effort field in live test
Add reasoning_effort: None to ProviderRuntimeOptions construction in
openai_codex_vision_e2e.rs to fix E0063 compile error.
---------
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
* feat(tools): add native LinkedIn integration tool
Add a config-gated LinkedIn tool that enables ZeroClaw to interact with
LinkedIn's REST API via OAuth2. Supports creating posts, listing own
posts, commenting, reacting, deleting posts, viewing engagement stats,
and retrieving profile info.
Architecture:
- linkedin.rs: Tool trait impl with action-dispatched design
- linkedin_client.rs: OAuth2 token management and API wrappers
- Config-gated via [linkedin] enabled = false (default off)
- Credentials loaded from workspace .env file
- Automatic token refresh with line-targeted .env update
39 unit tests covering security enforcement, parameter validation,
credential parsing, and token management.
* feat(linkedin): configurable content strategy and API version
- Expand LinkedInConfig with api_version and nested LinkedInContentConfig
(rss_feeds, github_users, github_repos, topics, persona, instructions)
- Add get_content_strategy tool action so agents can read config at runtime
- Fix hardcoded LinkedIn API version 202402 (expired) → configurable,
defaulting to 202602
- LinkedInClient accepts api_version as parameter instead of static header
- 4 new tests (43 total), all passing
* feat(linkedin): add multi-provider image generation for posts
Add ImageGenerator with provider chain (DALL-E, Stability AI, Imagen, Flux)
and SVG fallback card. LinkedIn tool create_post now supports generate_image
parameter. Includes LinkedIn image upload (register → upload → reference),
configurable provider priority, and 14 new tests.
* feat(whatsapp): add voice note transcription and TTS voice replies
- Add STT support: download incoming voice notes via wa-rs, transcribe
with OpenAI Whisper (or Groq), send transcribed text to agent
- Add TTS support: synthesize agent replies to Opus audio via OpenAI
TTS, upload encrypted media, send as WhatsApp voice note (ptt=true)
- Voice replies only trigger when user sends a voice note; text
messages get text replies only. Flag is consumed after one use to
prevent multiple voice notes per agent turn
- Fix transcription module to support OpenAI API key (not just Groq):
auto-detect provider from API URL, check ANTHROPIC_OAUTH_TOKEN /
OPENAI_API_KEY / GROQ_API_KEY env vars in priority order
- Add optional api_key field to TranscriptionConfig for explicit key
- Add response_format: opus to OpenAI TTS for WhatsApp compatibility
- Add channel capability note so agent knows TTS is automatic
- Wire transcription + TTS config into WhatsApp Web channel builder
* fix(providers): prefer ANTHROPIC_OAUTH_TOKEN over global api_key
When the Anthropic provider is used alongside a non-Anthropic primary
provider (e.g. custom: gateway), the global api_key would be passed
as credential override, bypassing provider-specific env vars. This
caused Claude Code subscription tokens (sk-ant-oat01-*) to be ignored
in favor of the unrelated gateway JWT.
Fix: for the anthropic provider, check ANTHROPIC_OAUTH_TOKEN and
ANTHROPIC_API_KEY env vars before falling back to the credential
override. This mirrors the existing MiniMax OAuth pattern and enables
subscription-based auth to work as a fallback provider.
* feat(linkedin): add scheduled post support via LinkedIn API
Add scheduled_at parameter to create_post and create_post_with_image.
When provided (RFC 3339 timestamp), the post is created as a DRAFT
with scheduledPublishOptions so LinkedIn publishes it automatically
at the specified time. This enables the cron job to schedule a week
of posts in advance directly on LinkedIn.
* fix(providers): prefer env vars for openai and groq credential resolution
Generalize the Anthropic OAuth fix to also cover openai and groq
providers. When used alongside a non-matching primary provider (e.g.
a custom: gateway), the global api_key would be passed as credential
override, causing auth failures. Now checks provider-specific env
vars (OPENAI_API_KEY, GROQ_API_KEY) before falling back to the
credential override.
* fix(whatsapp): debounce voice replies to voice final answer only
The voice note TTS was triggering on the first send() call, which was
often intermediate tool output (URLs, JSON, web fetch results) rather
than the actual answer. This produced incomprehensible voice notes.
Fix: accumulate substantive replies (>30 chars, not URLs/JSON/code)
in a pending_voice map. A spawned debounce task waits 4 seconds after
the last substantive message, then synthesizes and sends ONE voice
note with the final answer. Intermediate tool outputs are skipped.
This ensures the user hears the actual answer in the correct language,
not raw tool output in English.
* fix(whatsapp): voice in = voice out, text in = text out
Rewrite voice reply logic with clean separation:
- Voice note received: ALL text output suppressed. Latest message
accumulated silently. After 5s of no new messages, ONE voice note
sent with the final answer. No tool outputs, no text, just voice.
- Text received: normal text reply, no voice.
Atomic debounce: multiple spawned tasks race but only one can extract
the pending message (remove-inside-lock pattern). Prevents duplicate
voice notes.
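The remove-inside-lock pattern can be sketched std-only (names are illustrative; the real code runs under tokio rather than OS threads):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Several racing debounce tasks may fire, but only the caller that
// removes the pending entry while holding the lock gets to send.
fn try_take_pending(
    pending: &Arc<Mutex<HashMap<String, String>>>,
    chat_id: &str,
) -> Option<String> {
    // Remove inside the lock: at most one caller wins.
    pending.lock().unwrap().remove(chat_id)
}

fn main() {
    let pending = Arc::new(Mutex::new(HashMap::from([(
        "chat-1".to_string(),
        "final answer".to_string(),
    )])));
    // Four racing "debounce tasks": exactly one extracts the message.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let pending = Arc::clone(&pending);
            thread::spawn(move || try_take_pending(&pending, "chat-1").is_some())
        })
        .collect();
    let winners = handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .filter(|&won| won)
        .count();
    assert_eq!(winners, 1);
}
```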
* fix(whatsapp): voice replies send both text and voice note
Voice note in → text replies sent normally in real-time PLUS one
voice note with the final answer after 10s debounce. Only substantive
natural-language messages are voiced (tool outputs, URLs, JSON, code
blocks filtered out). Longer debounce (10s) ensures the agent
completes its full tool chain before the voice note fires.
Text in → text out only, no voice.
* fix(channels): suppress tool narration and ack reactions
- Add system prompt instruction telling the agent to NEVER narrate
tool usage (no "Let me fetch..." or "I will use http_request...")
- Disable ack_reactions (emoji reactions on incoming messages)
- Users see only the final answer, no intermediate steps
* docs(claude): add full CONTRIBUTING.md guidelines to CLAUDE.md
Add PR template requirements, code naming conventions, architecture
boundary rules, validation commands, and branch naming guidance
directly to CLAUDE.md for AI assistant reference.
* fix(docs): add blank lines around headings in CLAUDE.md for markdown lint
* fix(channels): strengthen tool narration suppression and fix large_futures
- Move anti-narration instruction to top of channel system prompt
- Add emphatic instruction for WhatsApp/voice channels specifically
- Add outbound message filter to strip tool-call-like patterns (⏳, 🔧)
- Box::pin the two-phase heartbeat agent::run call (16664 bytes on Linux)
* feat(channels): add Reddit, Bluesky, and generic Webhook adapters
- Reddit: OAuth2 polling for mentions/DMs/replies, comment and DM sending
- Bluesky: AT Protocol session auth, notification polling, post replies
- Webhook: Axum HTTP server for inbound, configurable outbound POST/PUT
- All three follow existing channel patterns with tests
* fix(channels): use neutral test fixtures and improve test naming in webhook
* feat(tools): add Google Workspace CLI (gws) integration
Adds GoogleWorkspaceTool for interacting with Google Drive, Sheets,
Gmail, Calendar, Docs, and other Workspace services via CLI.
- Config-gated (google_workspace.enabled)
- Service allowlist for restricted access
- Requires shell access for CLI delegation
- Input validation against shell injection
- Wrong-type rejection for all optional parameters
- Config validation for allowed_services (empty, duplicate, malformed)
- Registered in integrations registry and CLI discovery
Closes #2986
* style: fix cargo fmt + clippy violations
* feat(google-workspace): expand config with auth, rate limits, and audit settings
* fix(tools): define missing GWS_TIMEOUT_SECS constant
* fix: Box::pin large futures and resolve duplicate Default impl
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(stt): add multi-provider STT with TranscriptionProvider trait
Refactors single-endpoint transcription to support multiple providers:
Groq (existing), OpenAI Whisper, Deepgram, AssemblyAI, and Google Cloud
Speech-to-Text. Adds TranscriptionManager for provider routing with
backward-compatible config fields.
* style: fix cargo fmt + clippy violations
* fix: Box::pin large futures and resolve merge conflicts with master
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(providers): add Claude Code, Gemini CLI, and KiloCLI subprocess providers
Adds three new local subprocess-based providers for AI CLI tools.
Each provider spawns the CLI as a child process, communicates via
stdin/stdout pipes, and parses responses into ChatResponse format.
* fix: resolve clippy unnecessary_debug_formatting and rustfmt violations
* fix: resolve remaining clippy unnecessary_debug_formatting in CLI providers
* fix(providers): add AiAgent CLI category for subprocess providers
The schedule field in cron_add used a bare {"type":"object"} with a
description string encoding a tagged union in pseudo-notation. The patch
field in cron_update was an opaque {"type":"object"} despite CronJobPatch
having nine fully-typed fields. Both gaps cause weaker instruction-following
models to produce malformed or missing nested JSON when invoking these tools.
Changes:
- cron_add: expand schedule into a oneOf discriminated union with explicit
properties and required fields for each variant (cron/at/every), matching
the Schedule enum in src/cron/types.rs exactly
- cron_add: add descriptions to all previously undocumented top-level fields
- cron_add: expand delivery from a bare inline comment to fully-specified
properties with per-field descriptions
- cron_update: expand patch from opaque object to full properties matching
CronJobPatch (name, enabled, command, prompt, model, session_target,
delete_after_run, schedule, delivery)
- cron_update: schedule inside patch mirrors the same oneOf expansion
- Both: add inline NOTE comments flagging that oneOf is correct for
OpenAI-compatible APIs but SchemaCleanr::clean_for_gemini must be
applied if Gemini native tool calling is ever wired up
- Both: add schema-shape tests using the existing test_config/test_security
helper pattern, covering oneOf variant structure, required fields, and
delivery channel enum completeness
No behavior changes. No new dependencies. Backward compatible: the runtime
deserialization path (serde on Schedule/CronJobPatch) is unchanged.
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
* fix(tools): wire ActivatedToolSet into tool dispatch and spec advertisement
When deferred MCP tools are activated via tool_search, they are stored
in ActivatedToolSet but never consulted by the tool call loop.
tool_specs is built once before the iteration loop and never refreshed,
so the provider API tools[] parameter never includes activated tools.
find_tool only searches the static registry, so execution dispatch also
fails silently.
Thread Arc<Mutex<ActivatedToolSet>> from creation sites through to
run_tool_call_loop. Rebuild tool_specs each iteration to merge base
registry specs with activated specs. Add fallback in execute_one_tool
to check the activated set when the static registry lookup misses.
Change ActivatedToolSet internal storage from Box<dyn Tool> to
Arc<dyn Tool> so we can clone the Arc out of the mutex guard before
awaiting tool.execute() (std::sync::MutexGuard is not Send).
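The clone-out-of-guard pattern can be sketched with a simplified stand-in trait (the real Tool trait and its methods differ; this only shows why Arc storage matters):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Simplified stand-in for the real Tool trait.
trait Tool: Send + Sync {
    fn name(&self) -> &str;
}

struct EchoTool;
impl Tool for EchoTool {
    fn name(&self) -> &str {
        "echo"
    }
}

// Clone the Arc inside the lock, then drop the guard immediately: the
// caller can await tool.execute() without holding a non-Send MutexGuard.
fn lookup(
    activated: &Mutex<HashMap<String, Arc<dyn Tool>>>,
    name: &str,
) -> Option<Arc<dyn Tool>> {
    activated.lock().unwrap().get(name).cloned()
}

fn main() {
    let activated: Mutex<HashMap<String, Arc<dyn Tool>>> = Mutex::new(HashMap::new());
    activated.lock().unwrap().insert("echo".into(), Arc::new(EchoTool));
    let tool = lookup(&activated, "echo").expect("activated tool found");
    assert_eq!(tool.name(), "echo");
}
```

With `Box<dyn Tool>` storage there is nothing cheap to clone out, so the guard would have to live across the `.await`, which the compiler rejects for `std::sync::MutexGuard`.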
* fix(tools): add activated_tools field to new ChannelRuntimeContext test site
* feat(tools): add browser delegation tool for corporate web app interaction
Adds BrowserDelegateTool that delegates browser-based tasks to Claude Code
(or other browser-capable CLIs) for interacting with corporate tools
(Teams, Outlook, Jira, Confluence) via browser automation. Includes domain
validation (allow/blocklist), task templates, Chrome profile persistence
for SSO sessions, and timeout management.
* fix: resolve clippy violation in browser delegation tool
* fix(browser-delegate): validate URLs embedded in task text against domain policy
Scan the task text for http(s):// URLs using regex and validate each
against the allow/block domain lists before forwarding to the browser
CLI subprocess. This prevents bypassing domain restrictions by
embedding blocked URLs in the task parameter.
* fix(browser-delegate): constrain URL schemes, gate on runtime, document config
- Add has_shell_access gate so BrowserDelegateTool is only registered on
shell-capable runtimes (skipped with warning on WASM/edge runtimes)
- Add boundary tests for javascript: and data: URL scheme rejection
- URL scheme validation (http/https only) and config docs were already
addressed by a prior commit on this branch
* fix(tools): address CodeRabbit review findings for browser delegation
Remove dead `max_concurrent_tasks` config field and expand doc comments
on the `[browser_delegate]` config section in schema.rs.
When the LLM hallucinates an invalid model ID through the
model_routing_config tool's set_default action, the invalid model gets
persisted to config.toml. The channel hot-reload then picks it up and
every subsequent message fails with a non-retryable 404, permanently
killing the connection with no user recovery path.
Fix with two layers of defense:
1. Tool probe-and-rollback: after saving the new model, send a minimal
chat request to verify the model is accessible. If the API returns a
non-retryable error (404, auth failure, etc.), automatically restore
the previous config and return a failure notice to the LLM.
2. Channel safety net: in maybe_apply_runtime_config_update, reject
config reloads when warmup fails with a non-retryable error instead
of applying the broken config anyway.
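The first layer (probe-and-rollback) can be sketched as follows. All names here (`ProbeResult`, `set_default_model`, the probe itself) are hypothetical stand-ins for the real code; the point is the save-probe-restore ordering.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Config {
    default_model: String,
}

enum ProbeResult {
    Ok,
    NonRetryable(String), // 404, auth failure, etc.
    Retryable,            // transient; do not roll back
}

// Stand-in for the minimal chat request; the real probe hits the provider.
fn probe(model: &str) -> ProbeResult {
    if model == "gpt-real" {
        ProbeResult::Ok
    } else {
        ProbeResult::NonRetryable("404".into())
    }
}

fn set_default_model(cfg: &mut Config, model: &str) -> Result<(), String> {
    let previous = cfg.clone();
    cfg.default_model = model.to_string();
    match probe(model) {
        ProbeResult::Ok | ProbeResult::Retryable => Ok(()),
        ProbeResult::NonRetryable(e) => {
            // Restore the persisted config and report failure to the LLM.
            *cfg = previous;
            Err(format!("model rejected ({e}); previous config restored"))
        }
    }
}

fn main() {
    let mut cfg = Config { default_model: "gpt-real".into() };
    assert!(set_default_model(&mut cfg, "gpt-fake").is_err());
    assert_eq!(cfg.default_model, "gpt-real"); // rolled back
}
```

Rolling back only on non-retryable errors matters: a transient network failure should not undo a valid model change.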
Co-authored-by: Christian Pojoni <christian.pojoni@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(tools): add cloud transformation accelerator tools
Add cloud_ops and cloud_patterns tools providing read-only cloud
transformation analysis: IaC review, migration assessment, cost
analysis, and Well-Architected Framework architecture review.
Includes CloudOpsConfig, SecurityOpsConfig, and ConversationalAiConfig
schema additions, Box::pin fixes for recursive async in cron scheduler,
and approval_manager field in ChannelRuntimeContext test constructors.
Original work by @rareba. Rebased on latest master with conflict
resolution (kept SwarmConfig/SwarmStrategy exports, swarm tool
registration, and approval_manager in test constructors).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* style: cargo fmt Box::pin calls
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add BackupTool for creating, listing, verifying, and restoring
timestamped workspace backups with SHA-256 manifest integrity
checking. Add DataManagementTool for retention status, time-based
purge, and storage statistics. Both tools are config-driven via
new BackupConfig and DataRetentionConfig sections.
Original work by @rareba. Rebased on latest master with conflict
resolution for SwarmConfig/SwarmStrategy exports and swarm tool
registration, and added missing approval_manager fields in
ChannelRuntimeContext test constructors.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat(security): add MCSS security operations tool
Add managed cybersecurity service (MCSS) tool with alert triage,
incident response playbook execution, vulnerability scan parsing,
and security report generation. Includes SecurityOpsConfig, playbook
engine with approval gating, vulnerability scoring, and full test
coverage. Also fixes pre-existing missing approval_manager field in
ChannelRuntimeContext test constructors.
Original work by @rareba. Supersedes #3599.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: add SecurityOpsConfig to re-exports, fix test constructors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add a new read-only project_intel tool that provides:
- Status report generation (weekly/sprint/month)
- Risk scanning with configurable sensitivity
- Client update drafting (formal/casual, client/internal)
- Sprint summary generation
- Heuristic effort estimation
Includes multi-language report templates (EN, DE, FR, IT),
ProjectIntelConfig schema with validation, and comprehensive tests.
Also fixes missing approval_manager field in 4 ChannelRuntimeContext
test constructors.
Supersedes #3591 — rebased on latest master. Original work by @rareba.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add Microsoft 365 tool providing access to Outlook mail, Teams messages,
Calendar events, OneDrive files, and SharePoint search via Microsoft
Graph API. Includes OAuth2 token caching (client credentials and device
code flows), security policy enforcement, and config validation.
Rebased on latest master, resolving conflicts with SwarmConfig exports
and adding approval_manager to ChannelRuntimeContext test constructors.
Original work by @rareba.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add Notion integration with two components:
- NotionChannel: polls a Notion database for tasks with configurable
status properties, concurrency limits, and stale task recovery
- NotionTool: provides CRUD operations (query_database, read_page,
create_page, update_page) for agent-driven Notion interactions
Includes config schema (NotionConfig), onboarding wizard support,
and full unit test coverage for both channel and tool.
Supersedes #3609 — rebased on latest master to resolve merge conflicts
with swarm feature additions in config/mod.rs.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Commit 811fab3b added is_service_environment() as a top-level function and
called it from two sites. The call at line 445 is at module scope and resolves
fine. The call at line 1473 is inside mod native_backend, which is a child
module — Rust does not implicitly import parent-scope items, so the unqualified
name fails with E0425 (cannot find function in this scope).
Fix: prefix the call with super:: so it resolves to the parent module's
function, matching how mod native_backend already imports other parent items
(e.g. use super::BrowserAction).
The browser-native feature flag is required to reproduce:
cargo check --features browser-native # fails without this fix
cargo check --features browser-native # clean with this fix
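The scoping rule reproduces in miniature. A child module does not implicitly see the parent's items, so the unqualified call fails with E0425 while the `super::`-qualified call resolves (the function body below is a simplified stand-in for the real detection logic):

```rust
// Top-level item in the parent module.
fn is_service_environment() -> bool {
    std::env::var_os("INVOCATION_ID").is_some()
}

mod native_backend {
    pub fn sandbox_flags() -> &'static str {
        // Calling `is_service_environment()` unqualified here would fail
        // with E0425: child modules do not inherit parent-scope names.
        if super::is_service_environment() {
            "--no-sandbox"
        } else {
            ""
        }
    }
}

fn main() {
    let flags = native_backend::sandbox_flags();
    assert!(flags.is_empty() || flags == "--no-sandbox");
}
```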
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
Add workspace profile management, security boundary enforcement, and
a workspace management tool for isolated client engagements.
Original work by @rareba. Supersedes #3597 — rebased on latest master.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When zeroclaw runs as a service, the process inherits a minimal
environment without HOME, DISPLAY, or user namespaces. Headless
browsers (Chromium/Firefox) need HOME for profile/cache dirs and
fail with sandbox errors without user namespaces.
- Detect service environment via INVOCATION_ID, JOURNAL_STREAM,
or missing HOME on Linux
- Auto-apply --no-sandbox and --disable-dev-shm-usage for Chrome
in service mode
- Set HOME fallback and CHROMIUM_FLAGS on agent-browser commands
- systemd unit: add Environment=HOME=%h and PassEnvironment
- OpenRC script: export HOME=/var/lib/zeroclaw with start_pre()
to create the directory
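The detection and flag logic described above can be sketched like this (the helper names are illustrative; only the env-var heuristics and the two Chrome flags come from the commit):

```rust
fn is_service_environment() -> bool {
    // systemd sets INVOCATION_ID and JOURNAL_STREAM for managed units;
    // a missing HOME on Linux is also treated as a service context.
    std::env::var_os("INVOCATION_ID").is_some()
        || std::env::var_os("JOURNAL_STREAM").is_some()
        || (cfg!(target_os = "linux") && std::env::var_os("HOME").is_none())
}

fn chrome_args(service: bool) -> Vec<&'static str> {
    let mut args = vec!["--headless"];
    if service {
        // No user namespaces under a service manager, so the Chromium
        // sandbox cannot start; /dev/shm may also be tiny or absent.
        args.extend(["--no-sandbox", "--disable-dev-shm-usage"]);
    }
    args
}

fn main() {
    let args = chrome_args(is_service_environment());
    println!("{args:?}");
}
```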
Closes #3584
The http_request tool unconditionally blocked all private/LAN hosts with
no opt-out, preventing legitimate use cases like calling a local Home
Assistant instance or internal APIs. This adds an `allow_private_hosts`
config flag (default: false) under `[http_request]` that, when set to
true, skips the private-host SSRF check while still enforcing the domain
allowlist.
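A minimal config fragment showing the opt-in (only the `allow_private_hosts` key is confirmed by this change; the comment describes its documented behavior):

```toml
[http_request]
# Opt in to private/LAN destinations (e.g. a local Home Assistant).
# Default is false; the domain allowlist is still enforced either way.
allow_private_hosts = true
```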
Closes #3568
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a WebSocket endpoint at /ws/nodes where external processes and
devices can connect and advertise their capabilities at runtime.
The gateway tracks connected nodes in a NodeRegistry and exposes
their capabilities as dynamically available tools via NodeTool.
- Add src/gateway/nodes.rs: WebSocket endpoint, NodeRegistry, protocol
- Add src/tools/node_tool.rs: Tool trait wrapper for node capabilities
- Add NodesConfig to config schema (disabled by default)
- Wire /ws/nodes route into gateway router
- Add NodeRegistry to AppState and all test constructions
- Re-export NodesConfig and NodeTool from module roots
Closes #3093
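The registry-to-tool mapping can be sketched in miniature. This is an assumed shape, not the real `NodeRegistry` API: each connected node advertises capability names, and each capability surfaces as a node-namespaced tool.

```rust
use std::collections::HashMap;

// Illustrative registry: node id -> advertised capability names.
#[derive(Default)]
struct NodeRegistry {
    capabilities: HashMap<String, Vec<String>>,
}

impl NodeRegistry {
    fn register(&mut self, node_id: &str, caps: Vec<String>) {
        self.capabilities.insert(node_id.to_string(), caps);
    }

    // Each capability becomes a dynamically available tool, namespaced
    // by the node that advertised it.
    fn tool_names(&self) -> Vec<String> {
        let mut names: Vec<String> = self
            .capabilities
            .iter()
            .flat_map(|(node, caps)| caps.iter().map(move |c| format!("{node}.{c}")))
            .collect();
        names.sort();
        names
    }
}

fn main() {
    let mut reg = NodeRegistry::default();
    reg.register("pi4", vec!["camera.snapshot".into()]);
    reg.register("nas", vec!["disk.usage".into()]);
    assert_eq!(reg.tool_names(), vec!["nas.disk.usage", "pi4.camera.snapshot"]);
}
```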
* feat(provider): support custom API path suffix for custom: endpoints
Allow users to configure a custom API path for custom/compatible
providers instead of hardcoding /v1/chat/completions. Some self-hosted
LLM servers use different API paths.
Adds an optional `api_path` field to:
- Config (top-level and model_providers profile)
- ProviderRuntimeOptions
- OpenAiCompatibleProvider
When set, the custom path is appended to base_url instead of the
default /chat/completions suffix.
Closes #3125
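The URL assembly described above is roughly this. A sketch under the assumption that `api_path` fully replaces the default suffix when present (function name and slash normalization are illustrative):

```rust
// Build the chat endpoint URL for a custom/compatible provider.
fn chat_url(base_url: &str, api_path: Option<&str>) -> String {
    let base = base_url.trim_end_matches('/');
    match api_path {
        // Custom path replaces the default suffix entirely.
        Some(p) => format!("{base}/{}", p.trim_start_matches('/')),
        // Default OpenAI-compatible suffix.
        None => format!("{base}/chat/completions"),
    }
}

fn main() {
    assert_eq!(
        chat_url("https://llm.example.com/v1/", None),
        "https://llm.example.com/v1/chat/completions"
    );
    assert_eq!(
        chat_url("https://llm.example.com", Some("/api/v3/chat")),
        "https://llm.example.com/api/v3/chat"
    );
}
```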
* fix: add missing api_path field to test ModelProviderConfig initializers