* Sync jira tool description between .rs and en.toml
Replace multi-line operational guide in en.toml with the same one-liner
already in jira_tool.rs description(), matching the pattern used by all
other tools where both sources are in sync.
* Add myself action to jira tool for credential verification
* Add tests for myself action in jira tool
* Review and fix list_projects action added to jira tool
- Fix doc comment: update action count from four to five and add missing
myself entry
- Remove redundant statuses_url variable (was identical to url)
The list_projects action fetches all projects with their issue types,
statuses, and assignable users by combining /rest/api/3/project,
per-project /statuses, and /user/assignable/multiProjectSearch endpoints.
* Remove hardcoded project-specific statuses from shape_projects
Replace fixed known_order list (which included project-specific statuses
like 'Collecting Intel', 'Design', 'Verification') with a simple
alphabetical sort. Any Jira project can use arbitrary status names so
hardcoding an order is not applicable universally.
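The replacement ordering can be sketched as a plain case-insensitive alphabetical sort over whatever status names a project returns (an illustrative sketch; the real `shape_projects` operates on richer project structures):

```rust
// Sort arbitrary Jira status names alphabetically instead of relying on a
// hardcoded known_order list. Case-insensitive so "design" and "Design"
// sort together regardless of how a project capitalizes its statuses.
fn sort_statuses(mut statuses: Vec<String>) -> Vec<String> {
    statuses.sort_by_key(|s| s.to_lowercase());
    statuses
}

fn main() {
    let sorted = sort_statuses(vec![
        "Verification".into(),
        "Backlog".into(),
        "design".into(),
    ]);
    assert_eq!(sorted, vec!["Backlog", "design", "Verification"]);
}
```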
* Fix list_projects: bounded concurrency, error surfacing, and output shape
- Use tokio::task::JoinSet with STATUS_CONCURRENCY=5 to fetch per-project
statuses concurrently instead of sequentially, bounding API blast radius
- Surface user fetch errors: non-2xx and JSON parse failures now bail
instead of silently falling back to empty vec
- Surface per-project status JSON parse errors instead of swallowing them
with unwrap_or_else
- Move users to top-level output {projects, users} so they are not
duplicated across every project entry
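The real code bounds concurrency with `tokio::task::JoinSet` and `STATUS_CONCURRENCY=5`; as a std-only analogue of the same idea, per-project fetches can be processed in fixed-size chunks of scoped threads so no more than five requests are ever in flight:

```rust
use std::thread;

const STATUS_CONCURRENCY: usize = 5;

// Fetch per-project "statuses" with at most STATUS_CONCURRENCY workers at
// once. This is a std::thread sketch of the bounding idea only; the actual
// implementation uses tokio::task::JoinSet with async HTTP calls.
fn fetch_all_statuses(projects: &[String]) -> Vec<(String, String)> {
    let mut results = Vec::new();
    for chunk in projects.chunks(STATUS_CONCURRENCY) {
        thread::scope(|s| {
            let handles: Vec<_> = chunk
                .iter()
                .map(|p| s.spawn(move || (p.clone(), format!("statuses for {p}"))))
                .collect();
            for h in handles {
                results.push(h.join().expect("worker panicked"));
            }
        });
    }
    results
}

fn main() {
    let projects: Vec<String> = (1..=12).map(|i| format!("PROJ-{i}")).collect();
    // Every project is fetched, but never more than 5 concurrently.
    assert_eq!(fetch_all_statuses(&projects).len(), 12);
}
```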
* fix(tool): apply rustfmt formatting to jira_tool.rs
* feat(tunnel): add Pinggy tunnel support with configuration options
* feat(pinggy): update Pinggy tunnel configuration to remove domain field and improve SSH command handling
* feat(pinggy): add encryption and decryption for Pinggy tunnel token in config
* feat(pinggy): enhance region configuration for Pinggy tunnel with detailed options and validation
* feat(pinggy): enhance region validation and streamline output handling in Pinggy tunnel
* fix(pinggy): resolve clippy and fmt warnings
---------
Co-authored-by: moksh gupta <moksh.gupta@linktoany.com>
Add full rich media send/receive support using unified [TYPE:target] markers
(aligned with Telegram). Register QQ as a cron announcement delivery channel.
- Media upload with SHA256-based caching and TTL
- Attachment download to workspace with all types supported
- Voice: prefer voice_wav_url (WAV), inject QQ ASR transcription
- File uploads include file_name for proper display in QQ client
- msg_seq generation and reply rate-limit tracking
- QQ delivery instructions in system prompt
- Register QQ in cron scheduler and tool description
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: preserve session context across Slack messages when thread_replies=false
When thread_replies=false, inbound_thread_ts() was falling back to each
message's own ts, giving every message a unique conversation key and
breaking multi-turn context. Now top-level messages get thread_ts=None
when threading is disabled, so all messages from the same user in the
same channel share one session.
Closes #4052
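The fixed mapping can be sketched as follows (signature and names are simplified approximations of the real `inbound_thread_ts`):

```rust
// When thread_replies is disabled, a top-level message maps to
// thread_ts = None instead of falling back to its own ts, so all messages
// from the same user in the same channel share one conversation key.
fn inbound_thread_ts(
    thread_replies: bool,
    message_thread_ts: Option<&str>,
    message_ts: &str,
) -> Option<String> {
    match message_thread_ts {
        // Replies inside an existing thread keep their thread root either way.
        Some(root) => Some(root.to_string()),
        // Top-level message: only adopt its own ts when threading is enabled.
        None if thread_replies => Some(message_ts.to_string()),
        None => None,
    }
}

fn main() {
    // With thread_replies=false, two top-level messages share a session.
    assert_eq!(inbound_thread_ts(false, None, "111.1"), None);
    assert_eq!(inbound_thread_ts(false, None, "222.2"), None);
    // With threading enabled, each top-level message anchors its own thread.
    assert_eq!(inbound_thread_ts(true, None, "111.1"), Some("111.1".to_string()));
}
```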
* chore: ignore RUSTSEC-2024-0384 (unmaintained instant crate via nostr)
* fix: normalize 5-field cron weekday numbers to match standard crontab
The cron crate uses 1=Sun,2=Mon,...,7=Sat while standard crontab uses
0/7=Sun,1=Mon,...,6=Sat. This caused '1-5' to mean Sun-Thu instead of
Mon-Fri. Add a normalization step when converting 5-field expressions
to 6-field so user-facing semantics match standard crontab behavior.
Closes #4049
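The weekday renumbering reduces to a small arithmetic step (a sketch of the single-number case; the real code also rewrites ranges, lists, and steps inside the field):

```rust
// Map a standard crontab weekday (0/7=Sun, 1=Mon, ..., 6=Sat) to the cron
// crate's numbering (1=Sun, 2=Mon, ..., 7=Sat). Both 0 and 7 mean Sunday
// in crontab, which (n % 7) + 1 collapses to the crate's 1.
fn normalize_crontab_dow(n: u8) -> u8 {
    (n % 7) + 1
}

fn main() {
    assert_eq!(normalize_crontab_dow(0), 1); // Sun
    assert_eq!(normalize_crontab_dow(7), 1); // Sun (crontab alias)
    assert_eq!(normalize_crontab_dow(1), 2); // Mon
    assert_eq!(normalize_crontab_dow(5), 6); // Fri: '1-5' becomes '2-6'
    assert_eq!(normalize_crontab_dow(6), 7); // Sat
}
```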
* fix(provider): replace overall timeout with per-read timeout for Codex streams
The 120-second overall HTTP timeout was killing SSE streams mid-flight
when GPT-5.4 reasoning responses exceeded that duration. Replace with
a 300-second per-read timeout that only fires when no data arrives,
allowing long-running streams to complete while still detecting stalled
connections.
Closes #3786
* chore(deps): bump aws-lc-rs to fix RUSTSEC-2026-0044/0048
Update aws-lc-rs 1.16.1 → 1.16.2 (aws-lc-sys 0.38.0 → 0.39.0) to
resolve security advisory for X.509 Name Constraints Bypass.
`zeroclaw skills install https://clawhub.ai/owner/skill` previously
failed because is_git_source() treated all https:// URLs as git repos.
ClawHub is a web registry, not a git host.
- Add is_clawhub_source() to detect clawhub.ai URLs
- Add clawhub_slug() to extract the skill name from the URL path
- Add install_clawhub_skill_source() to download via the ClawHub
download API and extract the ZIP into the skills directory
- Exclude clawhub.ai URLs from git source detection
- Security audit runs on downloaded skills as with git installs
Closes #4022
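A sketch of the detection helpers named in the bullets above (the real URL handling may differ in detail, e.g. extra git-host checks):

```rust
// ClawHub is a web registry, not a git host, so its URLs must be
// recognized before the generic https:// -> git heuristic runs.
fn is_clawhub_source(source: &str) -> bool {
    source.starts_with("https://clawhub.ai/") || source.starts_with("https://www.clawhub.ai/")
}

// Extract the skill name from e.g. https://clawhub.ai/owner/skill -> "skill".
fn clawhub_slug(url: &str) -> Option<&str> {
    let path = url.strip_prefix("https://clawhub.ai/")?;
    let mut parts = path.trim_end_matches('/').splitn(2, '/');
    let _owner = parts.next()?;
    parts.next().filter(|s| !s.is_empty())
}

// Simplified: the actual is_git_source also accepts git@ and .git forms.
fn is_git_source(source: &str) -> bool {
    !is_clawhub_source(source) && source.starts_with("https://")
}

fn main() {
    let url = "https://clawhub.ai/owner/skill";
    assert!(is_clawhub_source(url));
    assert!(!is_git_source(url)); // no longer treated as a git repo
    assert_eq!(clawhub_slug(url), Some("skill"));
    assert!(is_git_source("https://github.com/owner/repo.git"));
}
```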
Add the missing interrupt_on_new_message field to MatrixConfig and wire
it through InterruptOnNewMessageConfig so Matrix behaves consistently
with Telegram, Slack, Discord, and Mattermost.
Closes #4058
When a request exceeds a model's context window and there is no
conversation history to truncate (e.g. system prompt alone is too
large), bail immediately with an actionable error message instead of
wasting all retry attempts on the same unrecoverable error.
Previously, the retry loop would attempt truncation, find nothing to
drop (only system + one user message), then fall through to the normal
retry logic which classified context window errors as retryable. This
caused 3 identical failing API calls for a single "hello" message.
The fix adds an early exit in all three chat methods (chat_with_history,
chat_with_tools, chat) when truncate_for_context returns 0 dropped
messages, matching the existing behavior in chat_with_system.
Fixes #4044
* feat(config): add google workspace operation allowlists
* docs(superpowers): link google workspace operation inventory sources
* docs(superpowers): verify starter operation examples
* fix(google_workspace): remove duplicate credential/audit blocks, fix trim in allowlist check, add duplicate-methods test
- Remove the duplicated credentials_path, default_account, and audit_log
blocks that were copy-pasted into execute() — they were idempotent but
misleading and would double-append --account args on every call.
- Trim stored service/resource/method values in is_operation_allowed() to
match the trim applied during Config::validate(), preventing a mismatch
where a config entry with surrounding whitespace would pass validation but
never match at runtime.
- Add google_workspace_allowed_operations_reject_duplicate_methods_within_entry
test to cover the duplicate-method validation path that was implemented but
untested.
* fix(google_workspace): close sub_resource bypass, trim allowed_services at runtime, mark spec implemented
- HIGH: extract and validate sub_resource before the allowlist check;
is_operation_allowed() now accepts Option<&str> for sub_resource and
returns false (fail-closed) when allowed_operations is non-empty and
a sub_resource is present — prevents nested gws calls such as
`drive/files/permissions/list` from slipping past a 3-segment policy
- MEDIUM: runtime allowed_services check now uses s.trim() == service,
matching the trim() applied during config validation
- LOW: spec status updated to Implemented; stale "does not currently
support method-level allowlists" line removed
- Added test: operation_allowlist_rejects_sub_resource_when_operations_configured
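The fail-closed rule from this commit can be sketched as follows (types simplified for illustration): with a non-empty allowlist, any call carrying a sub_resource is denied outright, so `drive/files/permissions/list` cannot slip past a policy written in 3-segment terms.

```rust
struct AllowedOp {
    service: String,
    resource: String,
    methods: Vec<String>,
}

// Sketch of the fail-closed check. At this stage the policy model has no
// way to express sub_resources, so their mere presence denies the call.
fn is_operation_allowed(
    allowed: &[AllowedOp],
    service: &str,
    resource: &str,
    sub_resource: Option<&str>,
    method: &str,
) -> bool {
    if allowed.is_empty() {
        return true; // no operation policy configured
    }
    if sub_resource.is_some() {
        return false; // fail closed: nested call with a 3-segment policy
    }
    allowed.iter().any(|op| {
        op.service == service
            && op.resource == resource
            && op.methods.iter().any(|m| m == method)
    })
}

fn main() {
    let policy = vec![AllowedOp {
        service: "drive".into(),
        resource: "files".into(),
        methods: vec!["list".into()],
    }];
    assert!(is_operation_allowed(&policy, "drive", "files", None, "list"));
    // Nested call denied even though service/resource/method all match.
    assert!(!is_operation_allowed(&policy, "drive", "files", Some("permissions"), "list"));
}
```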
* docs(google_workspace): document sub_resource limitation and add config-reference entries
Spec updates (superpowers/specs):
- Semantics section: note that sub_resource calls are denied fail-closed when
allowed_operations is configured
- Mental model: show both 3-segment and 4-segment gws command shapes; explain
that 4-segment commands are unsupported with allowed_operations in this version
- Runtime enforcement: correct the validation order to match the implementation
(sub_resource extracted before allowlist check, budget charged last)
- New section: Sub-Resource Limitation — documents impact, operator workaround,
and confirms the deny is intentional for this slice
- Follow-On Work: add sub_resource config model extension as item 1
Config reference updates (all three locales):
- Add [google_workspace] section with top-level keys, [[allowed_operations]]
sub-table, sub-resource limitation note, and TOML example
* fix(docs): add classroom and events to allowed_services list in all config-reference locales
* feat(google_workspace): extend allowed_operations to support sub_resource for 4-segment gws commands
All Gmail operations use gws gmail users <sub_resource> <method>, not the flat
3-segment shape. Without sub_resource support in allowed_operations, Gmail could
not be scoped at all, making the email-assistant use case impossible.
Config model:
- Add optional sub_resource field to GoogleWorkspaceAllowedOperation
- An entry without sub_resource matches 3-segment calls (Drive, Calendar, etc.)
- An entry with sub_resource matches only calls with that exact sub_resource value
- Duplicate detection updated to (service, resource, sub_resource) key
Runtime:
- Remove blanket sub_resource deny; is_operation_allowed now matches on all four
dimensions including the optional sub_resource
Tests:
- Add operation_allowlist_matches_gmail_sub_resource_shape
- Add operation_allowlist_matches_drive_3_segment_shape
- Add rejects_operation_with_unlisted_sub_resource
- Add google_workspace_allowed_operations_allow_same_resource_different_sub_resource
- Add google_workspace_allowed_operations_reject_invalid_sub_resource_characters
- Add google_workspace_allowed_operations_deserialize_without_sub_resource
- Update all existing tests to use correct gws command shapes
Docs:
- Spec: correct Gmail examples throughout; remove Sub-Resource Limitation section;
update data model, validation rules, example use case, and follow-on work
- Config-reference (en, vi, zh-CN): add sub_resource field to allowed_operations
table; update Gmail examples to correct 4-segment shapes
Platform:
- email-assistant SKILL.md: update allowed_operations paths to gmail/users/* shape
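The four-dimensional matching that replaces the earlier blanket deny can be sketched like this (types simplified; the real struct carries more fields):

```rust
struct AllowedOp {
    service: String,
    resource: String,
    sub_resource: Option<String>, // None matches only 3-segment calls
    methods: Vec<String>,
}

// Match on all four dimensions. An entry without sub_resource matches only
// flat 3-segment calls; an entry with one matches only that exact value.
fn is_operation_allowed(
    allowed: &[AllowedOp],
    service: &str,
    resource: &str,
    sub_resource: Option<&str>,
    method: &str,
) -> bool {
    allowed.iter().any(|op| {
        op.service == service
            && op.resource == resource
            && op.sub_resource.as_deref() == sub_resource
            && op.methods.iter().any(|m| m == method)
    })
}

fn main() {
    let policy = vec![AllowedOp {
        service: "gmail".into(),
        resource: "users".into(),
        sub_resource: Some("messages".into()),
        methods: vec!["send".into()],
    }];
    // gws gmail users messages send -> allowed
    assert!(is_operation_allowed(&policy, "gmail", "users", Some("messages"), "send"));
    // Unlisted sub_resource (drafts) is rejected.
    assert!(!is_operation_allowed(&policy, "gmail", "users", Some("drafts"), "send"));
    // A 3-segment call does not match an entry that requires a sub_resource.
    assert!(!is_operation_allowed(&policy, "gmail", "users", None, "send"));
}
```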
* fix(google_workspace): add classroom and events to service parameter schema description
* fix(google_workspace): cross-validate allowed_operations service against allowed_services
When allowed_services is explicitly configured, each allowed_operations entry's
service must appear in that list. An entry that can never match at runtime is a
misconfigured policy: it looks valid but silently produces a narrower scope than
the operator intended. Validation now rejects it with a clear error message.
Scope: only applies when allowed_services is non-empty. When it is empty, the tool
uses a built-in default list defined in the tool layer; the validator cannot
enumerate that list without duplicating the constant, so the cross-check is skipped.
Also:
- Update allowed_operations field doc-comment from 3-part (service, resource, method)
to 4-part (service, resource, sub_resource, method) model
- Soften Gmail sub_resource "required" language in config-reference (en, vi, zh-CN)
from a validation requirement to a runtime matching requirement — the validator
does not and should not hardcode API shape knowledge for individual services
- Add tests: rejects operation service not in allowed_services; skips cross-check
when allowed_services is empty
* fix(google_workspace): cross-validate allowed_operations.service against effective service set
When allowed_services is empty the validator was silently skipping the
service cross-check, allowing impossible configs like an unlisted service
in allowed_operations to pass validation and only fail at runtime.
Move DEFAULT_GWS_SERVICES from the tool layer (google_workspace.rs) into
schema.rs so the validator can use it unconditionally. When allowed_services
is explicitly set, validate against that set; when empty, fall back to
DEFAULT_GWS_SERVICES. Remove the now-incorrect "skips cross-check when empty"
test and add two replacement tests: one confirming a valid default service
passes, one confirming an unknown service is rejected even with empty
allowed_services.
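The effective-set rule can be sketched as below (the default list here is a truncated illustrative subset, not the real `DEFAULT_GWS_SERVICES` constant):

```rust
// Illustrative subset only; the real constant also includes classroom,
// events, and the other supported gws services.
const DEFAULT_GWS_SERVICES: &[&str] = &["gmail", "drive", "calendar"];

// Validate each allowed_operations service against the effective service
// set: the explicit allowed_services list when non-empty, otherwise the
// built-in defaults, so no config can ever name an impossible service.
fn validate_operation_services(
    allowed_services: &[String],
    operation_services: &[String],
) -> Result<(), String> {
    for svc in operation_services {
        let ok = if allowed_services.is_empty() {
            DEFAULT_GWS_SERVICES.contains(&svc.as_str())
        } else {
            allowed_services.iter().any(|s| s == svc)
        };
        if !ok {
            return Err(format!(
                "allowed_operations service '{svc}' is not in the effective service set"
            ));
        }
    }
    Ok(())
}

fn main() {
    // A valid default service passes even with empty allowed_services.
    assert!(validate_operation_services(&[], &["drive".into()]).is_ok());
    // An unknown service is rejected even with empty allowed_services.
    assert!(validate_operation_services(&[], &["frobnicate".into()]).is_err());
    // With an explicit list, only that list counts.
    assert!(validate_operation_services(&["gmail".into()], &["drive".into()]).is_err());
}
```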
* fix(google_workspace): update test assertion for new error message wording
* docs(google_workspace): fix stale 3-segment gmail example in TDD plan
* fix(google_workspace): address adversarial review round 4 findings
- Error message for denied operations now includes sub_resource when
present, so gmail/users/messages/send and gmail/users/drafts/send
produce distinct, debuggable errors.
- Audit log now records sub_resource, completing the trail for 4-segment
Gmail operations.
- Normalize (trim) allowed_services and allowed_operations fields at
construction time in new(). Runtime comparisons now use plain equality
instead of .trim() on every call, removing the latent defect where a
future code path could forget to trim and silently fail to match.
- Unify runtime character validation with schema validation: sub_resource
and service/resource/method checks now both require lowercase alphanumeric
plus underscore and hyphen, matching the validator's character set.
- Add positional_cmd_args() test helper and tests verifying 3-segment
(Drive) and 4-segment (Gmail) argument ordering.
- Add test confirming page_limit without page_all passes validation.
- Add test confirming whitespace in config values is normalized at
construction, not deferred to comparison time.
- Fix spec Runtime Enforcement section to reflect actual code order.
* fix(google_workspace): wire production helpers to close test coverage gaps
- Remove #[cfg(test)] from positional_cmd_args; execute() now calls the
same function the arg-ordering tests exercise, so a drift in the real
command-building path is caught by the existing tests.
- Extract build_pagination_args(page_all, page_limit) as a production
method used by execute(). Replace the brittle page_limit_without_page_all
test (which relied on environment-specific execution failure wording)
with four direct assertions on build_pagination_args covering all
page_all/page_limit combinations.
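The extracted helper is a pure function over its two inputs, which is what makes the four direct assertions possible (the `--page-all` / `--page-limit` flag spellings here are assumptions for illustration):

```rust
// Build pagination CLI args from the page_all/page_limit options.
// Pure function: trivially testable without executing gws.
fn build_pagination_args(page_all: bool, page_limit: Option<u32>) -> Vec<String> {
    let mut args = Vec::new();
    if page_all {
        args.push("--page-all".to_string());
    }
    if let Some(limit) = page_limit {
        args.push("--page-limit".to_string());
        args.push(limit.to_string());
    }
    args
}

fn main() {
    // All four page_all/page_limit combinations.
    assert!(build_pagination_args(false, None).is_empty());
    assert_eq!(build_pagination_args(true, None), vec!["--page-all"]);
    assert_eq!(build_pagination_args(false, Some(50)), vec!["--page-limit", "50"]);
    assert_eq!(
        build_pagination_args(true, Some(50)),
        vec!["--page-all", "--page-limit", "50"]
    );
}
```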
* fix(whatsapp): remove duplicate variable declaration causing unused warning
Remove duplicate `let transcription_config = self.transcription.clone()`
(line 626 shadowed by identical line 628). The duplicate caused a
compiler warning during --features whatsapp-web builds.
Note: the reported "hang" at 526/528 crates is expected behavior for
release builds with lto="fat" + codegen-units=1 — the final link step
is slow but does complete.
Closes #4034
---------
Co-authored-by: Nim G <theredspoon@users.noreply.github.com>
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Add `ZEROCLAW_GATEWAY_TIMEOUT_SECS` environment variable to override the
hardcoded 30-second gateway request timeout at runtime.
Agentic workloads with tool use (web search, MCP tools, sub-agent
delegation) regularly exceed 30 seconds, causing HTTP 408 timeouts at the
gateway layer — even though provider_timeout_secs allows longer LLM calls.
The default remains 30s for backward compatibility.
Co-authored-by: Claude Code (claude-opus-4-6) <noreply@anthropic.com>
Setup tokens (sk-ant-oat01-*) from Claude Pro/Max subscriptions require
specific headers and a system prompt to authenticate successfully.
Without these, the API returns 400 Bad Request.
Changes to apply_auth():
- Add claude-code-20250219 and interleaved-thinking-2025-05-14 beta
headers alongside existing oauth-2025-04-20
- Add anthropic-dangerous-direct-browser-access: true header
New apply_oauth_system_prompt() method:
- Prepends required "You are Claude Code" identity to system prompt
- Handles String, Blocks, and None system prompt variants
Changes to chat_with_system() and chat():
- Inject OAuth system prompt when using setup tokens
- Use NativeChatRequest/NativeChatResponse for proper SystemPrompt
enum support in chat_with_system
Test updates:
- Updated apply_auth test to verify new beta headers and
browser-access header
Tested with real OAuth token via `zeroclaw agent -m` — confirmed
working end-to-end.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
tool_search activates deferred MCP tools into ActivatedToolSet at runtime.
When tool_search runs in parallel with the tools it activates, a race
condition occurs where tool lookups happen before activation completes,
resulting in "Unknown tool" errors.
Force sequential execution in should_execute_tools_in_parallel() whenever
tool_search is present in the tool call batch.
Co-authored-by: Claude Code (claude-opus-4-6) <noreply@anthropic.com>
* feat(tools): add text browser tool for headless environments (#3879)
* fix(tools): remove redundant match arm in text_browser clippy lint
* ci: trigger fresh workflow run
In Compact (MetadataOnly) mode, skill tools were omitted from the system
prompt alongside instructions. This meant the LLM had no visibility into
TOML-defined tools when running in Compact mode, defeating the primary
advantage of TOML skills over MD skills (structured tool metadata).
Now Compact mode skips only instructions (loaded on demand via
read_skill) while still inlining tool definitions so the LLM knows
which skill tools are available.
Closes #3702
* feat: add Bailian (Aliyun) provider support
- Add bailian provider with coding.dashscope.aliyuncs.com endpoint
- Set User-Agent to 'openclaw' for compatibility with Bailian Coding Plan
- Support aliases: bailian, aliyun-bailian, aliyun
- Enable vision support for multimodal models
This allows users to use Alibaba Cloud's Bailian (百炼) Coding Plan API keys
with zeroclaw, matching the configuration used in OpenClaw.
* fix: address PR review comments for bailian provider
- Use is_bailian_alias() in factory match arm (consistency with other providers)
- Add bailian to canonical_china_provider_name() for onboarding routing
- Add BAILIAN_API_KEY env var resolution with DASHSCOPE_API_KEY fallback
- Add test coverage for bailian aliases in canonical_china_provider_name test
* fix: format bailian provider code
---------
Co-authored-by: Sagit-chu <sagit@example.com>
* fix(copilot): support vision via multi-part content messages
The Copilot provider sent image markers as plain text in the content
field. The API expects OpenAI-style multi-part content arrays with
separate text and image_url parts for vision input.
Add ApiContent enum (untagged) that serializes as either a plain string
or an array of ContentPart objects. User messages containing [IMAGE:]
markers are converted to multi-part content using the shared
multimodal::parse_image_markers() helper, matching the pattern used by
the OpenRouter and Compatible providers. Non-user messages and messages
without images serialize as plain strings.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
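The marker-splitting step can be sketched without serde (a simplified stand-in for the shared multimodal helper; the real `parse_image_markers` and the untagged `ApiContent` serialization live in the provider code):

```rust
#[derive(Debug, PartialEq)]
enum ContentPart {
    Text(String),
    ImageUrl(String),
}

// Split "[IMAGE:<url>]" markers out of a user message into separate text
// and image_url parts, as the OpenAI-style multi-part content API expects.
fn parse_image_markers(content: &str) -> Vec<ContentPart> {
    let mut parts = Vec::new();
    let mut rest = content;
    while let Some(start) = rest.find("[IMAGE:") {
        let before = &rest[..start];
        if !before.trim().is_empty() {
            parts.push(ContentPart::Text(before.trim().to_string()));
        }
        let after = &rest[start + "[IMAGE:".len()..];
        match after.find(']') {
            Some(end) => {
                parts.push(ContentPart::ImageUrl(after[..end].to_string()));
                rest = &after[end + 1..];
            }
            // Unterminated marker: drop the tail in this sketch.
            None => rest = "",
        }
    }
    if !rest.trim().is_empty() {
        parts.push(ContentPart::Text(rest.trim().to_string()));
    }
    parts
}

fn main() {
    let parts = parse_image_markers("look at [IMAGE:https://e.com/a.png] please");
    assert_eq!(
        parts,
        vec![
            ContentPart::Text("look at".into()),
            ContentPart::ImageUrl("https://e.com/a.png".into()),
            ContentPart::Text("please".into()),
        ]
    );
    // Messages without markers stay a single text part (serialized as a
    // plain string by the untagged enum in the real code).
    assert_eq!(parse_image_markers("hi"), vec![ContentPart::Text("hi".into())]);
}
```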
* fix(copilot): route chat_with_system through multimodal parser and add tests
Fix chat_with_system user message bypassing to_api_content(), which left
[IMAGE:] markers as plain text on that code path. Add unit tests for
to_api_content() covering image-marker, plain-text, and non-user roles.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* style: fix clippy wildcard match warning in copilot test
* style: apply rustfmt formatting
* style: fix formatting in channels/mod.rs
* ci: re-trigger CI
* fix(test): update conversation history key in new-session test
The conversation_history_key function now includes reply_target in the
key format. Update test assertions to use the correct key format
(telegram_chat-refresh_alice instead of telegram_alice).
---------
Co-authored-by: Tim Stewart <timstewartj@gmail.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix(ollama): default to prompt-guided tool calling for local models
Ollama's /api/chat native tool-calling parameter is silently ignored by
many local models (llama3, qwen3, phi4, etc.), causing them to emit
tool-call JSON as plain text instead of structured tool calls. When
native_tool_calling was true, the XML tool-use instructions were
suppressed from the system prompt, leaving models with no guidance on
the expected tool protocol.
Default to prompt-guided (XML) tool calling so all Ollama models receive
tool-use instructions in the system prompt and the existing text-based
parser in the agent loop can extract tool calls reliably.
Also fixes a minor rustfmt issue in channels/mod.rs from #2891.
Fixes #3999
Fixes #3982
* fix(channels): update test keys for reply_target in history key
The conversation_history_key now includes reply_target (from #2891),
but the refreshes_available_skills_after_new_session test still used
the old key format "telegram_alice" instead of
"telegram_chat-refresh_alice".
conversation_history_key() now includes reply_target to isolate
conversation histories across distinct Discord/Slack/Mattermost
channels for the same sender. Previously all channels produced
the same key {channel}_{sender}, causing cross-channel context bleed.
New key format: {channel}_{reply_target}_{sender} (without thread)
or {channel}_{reply_target}_{thread_ts}_{sender} (with thread).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
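The new key derivation can be sketched with a simplified signature:

```rust
// reply_target is always part of the key; thread_ts is included only when
// present. This keeps distinct channels for the same sender isolated.
fn conversation_history_key(
    channel: &str,
    reply_target: &str,
    thread_ts: Option<&str>,
    sender: &str,
) -> String {
    match thread_ts {
        Some(ts) => format!("{channel}_{reply_target}_{ts}_{sender}"),
        None => format!("{channel}_{reply_target}_{sender}"),
    }
}

fn main() {
    // The test fixture from the commit above: chat-refresh channel, no thread.
    assert_eq!(
        conversation_history_key("telegram", "chat-refresh", None, "alice"),
        "telegram_chat-refresh_alice"
    );
    // Distinct Discord channels no longer collide on {channel}_{sender}.
    assert_ne!(
        conversation_history_key("discord", "general", None, "bob"),
        conversation_history_key("discord", "random", None, "bob")
    );
}
```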
Swap auto_save and load_context order to avoid retrieving the just-stored
user message in memory recall.
Before: load_context → auto_save (could retrieve own message)
After: auto_save → load_context (query first, store after)
This fixes the issue where "[Memory context]\n- user_msg: <current>\n<current>"
would appear, causing the user's input to be duplicated.
Add tool descriptions alongside names in the deferred tools section so
the LLM can better identify which tool to activate via tool_search.
Original author: @mark-linyb
Add MiniMax-M2.7 and M2.7-highspeed to model selection lists. Set M2.7 as the new default for MiniMax, Novita, and Ollama cloud providers. Retain all previous models (M2.5, M2.1, M2) as alternatives.
The conditional cfg branches for AtomicU32 (32-bit fallback) and
AtomicU64 (64-bit) became dead code after portable_atomic::AtomicU64
was adopted in bd757996. The AtomicU32 branch would fail to compile
on 32-bit targets because the import was removed but the usage remained.
Use portable_atomic::AtomicU64 unconditionally, which works on all
targets.
Fixes #3452
Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(packaging): ensure Homebrew var directory exists on first start
When zeroclaw is installed via Homebrew, the service plist references
/opt/homebrew/var/zeroclaw (or /usr/local/var/zeroclaw on Intel) for
runtime data, but this directory was never created. This caused
`brew services start zeroclaw` to fail on first use.
The fix detects Homebrew installations by checking if the binary lives
under a Homebrew prefix (Cellar path or symlinked bin/ with Cellar
sibling). When detected:
- install_macos() creates the var directory and sets
ZEROCLAW_CONFIG_DIR + WorkingDirectory in the generated plist
- start() defensively ensures the var directory exists before
invoking launchctl
Closes #3464
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* style(service): fix cargo fmt formatting in Homebrew var dir tests
Collapse multi-line assert_eq! macros to single-line form as
required by cargo fmt.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: SpaceLobster <spacelobster@SpaceLobsters-Mac-mini.local>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add proper SAFETY documentation comments to the two unsafe blocks using
libc::getuid() in src/service/mod.rs:
1. In is_root() function: documents why getuid() is safe to call
2. In is_root_matches_system_uid() test: documents the test's purpose
This addresses the best practices audit finding about unsafe code without
safety comments. While we could use the nix crate's safe wrapper, adding
safety comments is a minimal change that satisfies the audit requirement
without introducing new dependencies.
As noted in refactor-candidates.md, the nix crate alternative would also
be acceptable, but this change follows the principle of minimal intervention
for documentation-only improvements.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude <noreply@anthropic.com>
When the HOME environment variable is not set (common in cron jobs or
other non-TTY environments), `shellexpand::tilde` returns the literal
`~` unexpanded. This caused SOUL.md, IDENTITY.md, and other workspace
files to not be loaded because the workspace_dir path was invalid.
This fix adds `expand_tilde_path()` helper that falls back to
`directories::UserDirs` when shellexpand fails to expand tilde.
If both methods fail, a warning is logged advising users to use
absolute paths or set HOME explicitly in cron environments.
Closes #3819
Venice's API accepts the OpenAI-compatible tool format without error
but models ignore the tool specs and hallucinate tool usage in prose
instead. Disable native tool calling so Venice uses prompt-guided
tools, which work reliably.
Add without_native_tools() builder method to OpenAiCompatibleProvider
for providers that support system messages but not native function
calling.
Closes #4007
The /new command only cleared in-memory conversation history but left
the JSONL session file on disk. On daemon restart, stale history was
rehydrated, negating the user's session reset. This caused context
pollution and degraded tool calling reliability.
Add delete_session() to SessionStore and call it from the /new handler
so both in-memory and persisted state are cleared.
Closes #4009
Add interruption_scope_id to ChannelMessage for thread-aware cancellation. Slack genuine thread replies and Matrix threads get scoped keys, preventing cross-thread cancellation. All other channels preserve existing behavior.
Supersedes #3900. Depends on #3891.
* feat(channel): add /stop command to cancel in-flight tasks
Adds an explicit /stop slash command that allows users on any non-CLI
channel (Matrix, Telegram, Discord, Slack, etc.) to cancel an agent
task that is currently running.
Changes:
- is_stop_command(): new helper that detects /stop (case-insensitive,
optional @botname suffix), not gated on channel type
- /stop fast path in run_message_dispatch_loop: intercepts /stop before
semaphore acquisition so the target task is never replaced in the store;
fires CancellationToken on the running task; sends reply via tokio::spawn
using the established two-step channel lookup pattern
- register_in_flight separated from interrupt_enabled: all non-CLI tasks
now enter the in_flight_by_sender store, enabling /stop to reach them;
auto-cancel-on-new-message remains gated on interrupt_enabled (Telegram/
Slack only) — this is a deliberate broadening, not a side effect
Deferred to follow-up (feat/matrix-interrupt-on-new-message):
- interrupt_on_new_message config field for Matrix
- thread-aware interruption_scope_key (requires per-channel thread_ts
semantics analysis; Slack always sets thread_ts, Matrix only for replies)
Supersedes #2855
Tests: 7 new unit tests for is_stop_command; all 4075 tests pass.
* feat(channel): add interrupt_on_new_message support for Discord
---------
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
* feat(delegate): make delegate timeout configurable via config.toml
Add configurable timeout options for delegate tool:
- timeout_secs: for non-agentic sub-agent calls (default: 120s)
- agentic_timeout_secs: for agentic sub-agent runs (default: 300s)
Previously these were hardcoded constants (DELEGATE_TIMEOUT_SECS
and DELEGATE_AGENTIC_TIMEOUT_SECS). Users can now customize them
in config.toml under [[delegate.agents]] section.
Fixes #3898
* feat(config): make delegate tool timeouts configurable via config.toml
This change makes the hardcoded 120s/300s delegate tool timeouts
configurable through the config file:
- Add [delegate] section to Config with timeout_secs and agentic_timeout_secs
- Add DelegateToolConfig struct for global default timeout values
- Add DEFAULT_DELEGATE_TIMEOUT_SECS (120) and DEFAULT_DELEGATE_AGENTIC_TIMEOUT_SECS (300) constants
- Remove hardcoded constants from delegate.rs
- Update tests to use constant values instead of magic numbers
- Update examples/config.example.toml with documentation
Closes #3898
* fix: keep delegate timeout fields as Option<u64> with global fallback
- Change DelegateAgentConfig.timeout_secs and agentic_timeout_secs from
u64 to Option<u64> so per-agent overrides are truly optional
- Implement manual Default for DelegateToolConfig with correct values
(120s and 300s) instead of derive(Default)
- Add DelegateToolConfig to DelegateTool struct and wire through
constructors so agent timeouts fall back to global [delegate] config
- Add validation for delegate timeout values in Config::validate()
- Fix example config to use [agents.name] table syntax matching the
HashMap<String, DelegateAgentConfig> schema
- Add missing timeout fields to all DelegateAgentConfig struct literals
across codebase (doctor, swarm, model_routing_config, tools/mod)
* chore: trigger CI
* chore: retrigger CI
* fix: cargo fmt line wrapping in config/mod.rs
* fix: import timeout constants in delegate tests
* style: cargo fmt
---------
Co-authored-by: vincent067 <vincent067@outlook.com>
* feat: add Jira tool with get_ticket, search_tickets, and comment_ticket
Implements a new `jira` tool following the existing zeroclaw tool
conventions (Tool trait, SecurityPolicy, config-gated registration).
- get_ticket: configurable detail level (basic/basic_search/full/changelog)
with response shaping; always in the default allowed_actions list
- search_tickets: JQL-based search with cursor pagination (nextPageToken);
always returns basic_search shape; gated by allowed_actions
- comment_ticket: posts ADF comments with inline markdown-like syntax —
@email mentions resolved to Jira accountId, **bold**, bullet lists,
newlines; gated by allowed_actions and SecurityPolicy Act operation
Config: [jira] section with base_url, email, api_token (encrypted at
rest, falls back to JIRA_API_TOKEN env var), allowed_actions (default:
["get_ticket"]), and timeout_secs. Validated on load.
Tool description in tool_descriptions/en.toml documents all three
actions and the full comment syntax for the AI system prompt.
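An illustrative `[jira]` section matching the description above. Key names follow the commit messages (`enabled` appears in the review-fix commit); values are placeholders.

```toml
[jira]
enabled = true
base_url = "https://example.atlassian.net"
email = "bot@example.com"
# api_token may be omitted; the JIRA_API_TOKEN env var is the fallback.
# Default allowed_actions is ["get_ticket"]; the others are opt-in.
allowed_actions = ["get_ticket", "search_tickets", "comment_ticket"]
timeout_secs = 30
```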
* fix: address jira tool code review findings
High priority:
- Validate issue_key against ^[A-Z][A-Z0-9]+-\d+$ before URL interpolation
to prevent path traversal in get_ticket and comment_ticket
Medium priority:
- Add email guard in tool registration (mod.rs) to skip with a warning
instead of registering a broken tool when jira.email is empty
- Shape comment_ticket response to return only id, author, created —
avoids exposing internal Jira metadata to the AI
- Replace O(n²) comment matching in shape_basic with a HashMap lookup
keyed by comment ID for O(1) access
- Add api_token validation in Config::validate() checking both the
config field and JIRA_API_TOKEN env var when jira.enabled = true
Low priority:
- Make shape_basic_search private (was accidentally pub)
- Extend clean_email to strip leading punctuation ('(' and '[') so that
@(john@co.com) resolves correctly; fix suffix computation via pointer
arithmetic to handle the shifted offset
- Clarify tool_descriptions/en.toml: @prefix is required for mentions,
bare emails without @ are treated as plain text
- Handle unmatched ** in parse_inline: emit as literal text instead of
silently producing a bold node with no closing marker
* fix(jira): allow lowercase project keys in issue_key validation
Relax validate_issue_key to accept both PROJ-123 and proj-123, since
some Jira instances use lowercase custom project keys. Path traversal
protection is preserved via alphanumeric + digit-number requirement.
* feat(tools): add tool honesty instructions to system prompt
Prevent AI from fabricating tool results by injecting a CRITICAL:
Tool Honesty section into both channel and CLI/agent system prompts.
Rules: never fabricate or guess tool results, report errors as-is,
and ask the user when unsure if a tool call succeeded.
* style: sort JiraConfig import alphabetically in config/mod.rs
* style(jira): fix strict clippy lints in jira_tool
- Derive Default for LevelOfDetails instead of manual impl
- Use char arrays in trim_start_matches/trim_end_matches
- Allow cast_possible_truncation on search_tickets (usize->u32 bounded by max_results)
- Remove needless borrow on &email
* fix(ci): adapt to upstream autonomy_level additions in channels/mod.rs
- Add missing autonomy_level argument to build_system_prompt_with_mode call in test
- Add missing autonomy_level field in ChannelRuntimeContext test initializer
- Allow large_futures in load_or_init test (Config struct growth from JiraConfig)
* fix(ci): resolve duplicate and missing autonomy_level in test initializers
* fix(ci): use TelegramRecordingChannel in telegram-specific test
The test process_channel_message_executes_tool_calls_instead_of_sending_raw_json
sent messages on channel "telegram" but registered RecordingChannel (name:
"test-channel"), causing the channel lookup to return None and no messages to
be sent.
* fix(jira): prevent panics on short dates, fix dedup bug, normalize base_url
- Add date_prefix() helper to safely slice date strings instead of
panicking on empty or short strings from the Jira API.
- Replace Vec::dedup() with HashSet-based retain in extract_emails()
to correctly deduplicate non-adjacent duplicates.
- Strip trailing slashes from base_url during construction to prevent
double-slash URLs.
- Add tests for date_prefix and non-adjacent email dedup.
---------
Co-authored-by: Anatolii <anatolii@Anatoliis-MacBook.local>
Co-authored-by: Anatolii <anatolii.fesiuk@gmail.com>
Preserve assistant text from native tool-call turns in draft updates. Falls back to response_text when parsed_text is empty and native tool calls are present. Relays text through on_delta for draft-capable channels like Telegram.
Supersedes #3976. Closes #3974
Inject a human-readable summary of the active SecurityPolicy into the system prompt Safety section. LLM sees allowed commands, forbidden paths, autonomy level, and rate limits.
Supersedes #3968. Closes #2404
Adds text-to-speech output to the Telegram channel, mirroring the
existing WhatsApp Web voice-chat implementation. When a user sends a
voice note (transcribed via STT), the channel enters voice-chat mode
and subsequent agent replies are synthesised into a Telegram voice note
via the configured TTS provider, in addition to the normal text reply.
Sending a text message exits voice-chat mode.
Implementation details:
- Add `tts_config`, `voice_chats`, and `pending_voice` fields to
`TelegramChannel`
- Add `with_tts()` builder method, gated on `config.enabled`
- Track voice-chat state: enter on successful STT transcription, exit
on incoming text message
- `synthesize_and_send_voice()` static method: synthesises audio via
`TtsManager` and uploads to Telegram using `sendVoice` multipart API
- 10-second debounce in `send()` ensures only the final substantive
reply in a tool chain gets a voice note (skips JSON, code blocks,
URLs, short status messages)
- Wire `.with_tts(config.tts.clone())` into both Telegram construction
sites in the channel factory
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Replace relative docs/assets/zeroclaw-banner.png paths with absolute
raw.githubusercontent.com URLs in all 31 README files so the banner
renders correctly regardless of where the README is viewed
- Switch web dashboard favicon and logos from logo.png to zeroclaw-trans.png
- Add zeroclaw-trans.png and zeroclaw-banner.png assets
- Update build.rs to track new dashboard asset
- Fix missing autonomy_level in new test + Box::pin large future
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
build_system_prompt_with_mode was discarding the autonomy_level
parameter, passing None to build_system_prompt_with_mode_and_autonomy.
This caused full-autonomy prompts to still include "ask before acting"
instructions. Convert the level to an AutonomyConfig and pass it through.
The refresh-skills test was missing the autonomy_level parameter
added to build_system_prompt_with_mode and ChannelRuntimeContext
by a recently merged PR.