Compare commits

297 Commits

Author SHA1 Message Date
Argenis 06f65fb711 feat(release): bump to v0.2.0 with auto-tweet and contributor notes (#3475)
* feat(release): bump to v0.2.0, auto-tweet with features + contributors

- Bump version from 0.1.9 to 0.2.0
- Release notes now auto-generate from feat() commits only (no bug
  fixes), keeping them clean and concise
- Contributors are always featured in release notes, pulled from
  git log authors and Co-Authored-By trailers
- Tweet workflow now fires on all releases (beta + stable), auto-
  generates punchy feature-focused tweets with contributor shoutouts
- Stable release workflow gets the same release notes treatment
- Tweet gracefully skips if Twitter secrets aren't configured

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(release): credit all contributors, wider range, filter bots

- Use a wider git range for contributor collection — skips same-version
  betas to capture everyone who contributed to the release, not just
  the last incremental push
- Filter out bot accounts (dependabot, github-actions, copilot, etc.)
  from contributor lists
- Tweet shows up to 6 contributors with "+ N more" when there are
  extras, ensuring everyone gets credit
- Deduplicate contributors across git authors and Co-Authored-By
- Deduplicate features with sort -uf for cleaner notes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(release): use last stable tag for contributor range, not last beta

Ensures the tweet and release notes capture ALL contributors since the
last stable release (e.g. v0.1.9a), not just since the last beta tag.
This gives proper credit to everyone who contributed across the full
release cycle.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-14 07:59:04 -04:00
Argenis 46d4b13c22 feat(install): branded one-click installer with secure pairing flow (#3471)
* feat(install): consolidate one-click installer with branded output and inline onboarding

- Add blue color scheme with 🦀 crab emoji branding throughout installer
- Add structured [1/3] [2/3] [3/3] step output with ✓/·/✗ indicators
- Consolidate onboarding into install.sh: inline provider selection menu,
  API key prompt, and model override — no separate wizard step needed
- Replace --onboard/--interactive-onboard with --skip-onboard (opt-out)
- Add OS detection display, install method, version detection, upgrade vs
  fresh install logic
- Add post-install gateway service install/restart, doctor health check
- Add dashboard URL (port 42617) with clipboard copy and browser auto-open
- Add docs link (https://www.zeroclawlabs.ai/docs) to success output
- Display pairing code after onboarding in Rust CLI (src/main.rs)
- Remove --interactive flag from `zeroclaw onboard` CLI command
- Remove redundant scripts/install-release.sh legacy redirect
- Update all --interactive references across codebase

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(onboard): auto-pair and include bearer token in dashboard URL

After onboarding, the CLI now auto-pairs using the generated pairing
code to produce a bearer token, then displays the dashboard URL with
the token embedded (e.g. http://127.0.0.1:42617?token=zc_...) so
users can access the dashboard immediately without a separate pairing
step. The token is also persisted to config for gateway restarts.

The install script captures this token-bearing URL from the onboard
output and uses it for clipboard copy and browser auto-open.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* security(onboard): revert token-in-URL, keep pairing code terminal-only

Removes the auto-pair + token-in-URL approach in favor of the original
secure pairing flow. Bearer tokens should never appear in URLs where
they can leak via browser history, Referer headers, clipboard, or
proxy logs. The pairing code stays in the terminal and the user enters
it in the dashboard to complete the handshake securely.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: apply cargo fmt formatting

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-14 07:33:14 -04:00
Jacobinwwey 8fcbb6eb2d fix(channels): harden slack threading and utf8 truncation (#3461)
* fix(channels): harden slack threading and utf8 truncation

* refactor(channel): collapse interrupt flags to satisfy clippy

---------

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-14 07:31:10 -04:00
Argenis ce22eba7d0 fix(channels): proactively trim conversation history before provider call (#3473)
Conversation history on long-running channel sessions (e.g. Feishu) grew
unbounded until the provider returned a context-window-exceeded error.
The existing reactive compaction only kicked in *after* the error,
causing the user's message to be lost and requiring a resend.

Add proactive_trim_turns() which estimates total character count and
drops the oldest turns before the request reaches the provider.  The
budget (400k chars ≈ 100k tokens) leaves headroom for system prompt,
memory context, and model output.
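
A minimal sketch of the trimming idea (Turn and its field layout are
simplified stand-ins for the real session types):

    const CHAR_BUDGET: usize = 400_000; // ~100k tokens at ~4 chars/token

    struct Turn {
        content: String,
    }

    // Drop oldest turns until the estimated total fits the budget,
    // always keeping the most recent turn intact.
    fn proactive_trim_turns(history: &mut Vec<Turn>) {
        let mut total: usize = history.iter().map(|t| t.content.len()).sum();
        while total > CHAR_BUDGET && history.len() > 1 {
            let removed = history.remove(0); // oldest first
            total -= removed.content.len();
        }
    }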

Closes #3460
2026-03-14 07:15:34 -04:00
Argenis 7ba4d06e78 fix(docs): use absolute URL for install.sh One-Click Setup link (#3470)
The relative href="install.sh" in README nav headers resolves to
zeroclawlabs.ai/install.sh when viewed on the website, which returns
404 since the website does not serve repo-root files. Replace with
the raw.githubusercontent.com URL used elsewhere in the docs.

Fixes #3463
2026-03-14 07:14:47 -04:00
lilstaz dc12d03876 feat(gateway): enable multi-turn chat for WebSocket connections (#3467)
Replace single-turn chat with persistent Agent to maintain conversation
history across WebSocket turns within the same connection.

Co-authored-by: staz <starzwan2333@gmail.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-14 07:04:59 -04:00
Argenis 3151604b04 fix(ci): include install.sh in release assets and website dispatch (#3469)
The website at zeroclawlabs.ai shows a copy button with
curl -fsSL https://zeroclawlabs.ai/install.sh | bash but the website
does not serve the script, returning 404.

Add install.sh as a GitHub release asset in both beta and stable
workflows so it is always available at a stable URL. Pass the raw
GitHub URL in the website redeploy dispatch payload so the website
can configure a redirect or proxy for /install.sh.

Closes #3463
2026-03-14 07:04:17 -04:00
guitaripod c5fcda06ad fix(agent): add channel media markers to system prompt (#3459)
The system prompt has no documentation of channel media markers
([Voice], [IMAGE:], [Document:]), causing the LLM to misinterpret
transcribed voice messages as unprocessable audio attachments instead
of responding to the transcribed text content.

Re-applies fix from merged dev PR #1697 which was lost during the
master branch migration.

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-14 06:58:40 -04:00
Ericsunsk 51a52dcadb fix(memory): pass embedding_routes in gateway and agent loop (#3462)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-14 06:56:55 -04:00
argenis de la rosa dd9e26eac6 ci: trigger website redeploy after release publish
After publishing a beta or stable release, dispatch a
repository_dispatch event to zeroclaw-website so it rebuilds
with the latest version tag automatically.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-14 06:08:09 -04:00
argenis de la rosa c4b2a21c61 ci: upgrade tweet-release with OpenClaw-style formatting, image support, and manual dispatch 2026-03-14 05:01:56 -04:00
argenis de la rosa d6170ab49b ci: add tweet-release workflow to post to X on stable releases 2026-03-14 04:59:18 -04:00
Argenis 399c896c3b chore: bump release notes to v0.1.9b (#3455)
Update What's New and Recent Contributors sections to reference
v0.1.9b release.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 22:57:11 -04:00
Argenis 71d32c3b04 docs(readme): add v0.1.9a release highlights and contributor credits (#3453)
* docs(readme): add v0.1.9a release highlights and contributor credits

Add "What's New in v0.1.9a" section covering web dashboard restyle,
new providers/channels, MCP tools, infrastructure updates, and fixes.
Add "Recent Contributors" table crediting key contributors with their
specific highlights for this release cycle.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(docs): use table format for release highlights to pass MD036 lint

Replace bold-text subsections with a table to satisfy the markdown
linter's no-emphasis-as-heading rule (MD036).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 22:41:11 -04:00
Christopher Wong 27936b051d Windows/wizard deadlock (#3451)
* ci: add x86_64-pc-windows-msvc to build matrix

* fix: prevent test deadlock in ensure_onboard_overwrite_allowed

Gate non-interactive terminal check behind cfg!(not(test)) so tests with
force=false do not hang waiting on stdin. cfg!(test) path bails immediately
with a clear message. No changes to extra_headers, mcp, nodes, or shellexpand.

---------

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 21:59:05 -04:00
Alix-007 39d788a95f fix(linq): accept latest webhook payload shape (#3351)
* fix(linq): accept current webhook payload shape

* style(linq): satisfy clippy lifetime lint

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-13 21:53:41 -04:00
Alix-007 5d921bd37d fix(config): decrypt feishu channel secrets on load (#3355)
* fix(config): decrypt feishu channel secrets on load

* style(config): format feishu secret assertions

---------

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 21:53:11 -04:00
Alix-007 d17f0a946c fix(config): recover docker runtime path on save (#3354)
* fix(config): recover docker runtime path on save

* style(config): align docker save path formatting

---------

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 21:52:40 -04:00
Alix-007 2710ce65cc fix(cli): handle empty invocation before clap parse (#3353) 2026-03-13 21:52:15 -04:00
Alix-007 51e8fc8423 fix(install): restore legacy release installer path (#3352) 2026-03-13 21:51:56 -04:00
Alix-007 ecf9d477bd fix(docs): use master install script URL (#3349)
Co-authored-by: Alix-007 <Alix-007@users.noreply.github.com>
2026-03-13 21:51:34 -04:00
Christopher Wong 625784c25f ci: add x86_64-pc-windows-msvc to build matrix (#3449)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 21:45:08 -04:00
Asuta 348c0c37b7 feat(agent): support interactive session state persistence and recovery (#3421)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 18:55:42 -04:00
zq 11a1dae55b docs(i18n/zh-CN): Complete full Chinese documentation translation and… (#3429)
* docs(i18n/zh-CN): Complete full Chinese documentation translation and i18n migration

  - Translate 58 core Chinese docs covering all modules (setup, reference, operations, security, hardware,
  contributor guides, etc.)
  - Migrate all .zh-CN.md files to i18n/zh-CN directory preserving original directory structure
  - Update internal links across 3 entry files:
    * Root README.zh-CN.md
    * docs/README.zh-CN.md
    * docs/SUMMARY.zh-CN.md
  - All links point to correct i18n paths with full accessibility
  - Align with Vietnamese i18n directory structure per project conventions

* fix(i18n/zh-CN): resolve all 30 blocking markdown lint errors
- Fix all MD022 blank lines around headings issues across 10 files
- Fix MD036 emphasis used as heading issue in refactor-candidates.md

* docs(i18n/zh-CN): resolve broken document reference link in root README.zh-CN.md

* fix(docs): repair double-quote HTML bug and broken relative links in zh-CN docs

- Remove stray extra `"` after href closing quotes in 6 HTML links in
  root README.zh-CN.md
- Fix relative link depth for files under docs/i18n/zh-CN/ that
  reference repo-root files (CONTRIBUTING.md, README.zh-CN.md,
  .github/workflows/, tests/, docs/SUMMARY.zh-CN.md): use correct
  number of `../` levels based on actual file location in the tree

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-13 18:46:29 -04:00
Argenis dd1681be44 docs(i18n): add documentation hub translations for all 30 languages (#3450)
Add README and SUMMARY translations for 25 missing languages to match
the root-level README coverage. Update English docs hub and SUMMARY
with links to all localized hubs.

New languages: ar, bn, cs, da, de, el, es, fi, he, hi, hu, id, it,
ko, nb, nl, pl, pt, ro, sv, th, tl, tr, uk, ur. Also adds missing
SUMMARY.vi.md for Vietnamese.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 18:42:26 -04:00
Fausto cabd3de3cb Allow ZEROCLAW_PROVIDER_URL env variable to override api_url (#3414)
* Ignore JetBrains .idea folder

* fix(ollama): support stringified JSON tool call arguments

* providers: allow ZEROCLAW_PROVIDER_URL env var to override Ollama base URL

Supports container deployments where Ollama runs on a Docker network host
(e.g. http://ollama:11434) without requiring config.toml changes.

Includes regression test ensuring the environment override works.

* fix(clippy): replace Default::default() with ProviderRuntimeOptions::default()

---------

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 18:24:37 -04:00
Argenis 87cf6b0e93 feat(gateway): add dynamic node discovery and capability advertisement (#3448)
Add a WebSocket endpoint at /ws/nodes where external processes and
devices can connect and advertise their capabilities at runtime.
The gateway tracks connected nodes in a NodeRegistry and exposes
their capabilities as dynamically available tools via NodeTool.

- Add src/gateway/nodes.rs: WebSocket endpoint, NodeRegistry, protocol
- Add src/tools/node_tool.rs: Tool trait wrapper for node capabilities
- Add NodesConfig to config schema (disabled by default)
- Wire /ws/nodes route into gateway router
- Add NodeRegistry to AppState and all test constructions
- Re-export NodesConfig and NodeTool from module roots

Closes #3093
2026-03-13 18:23:48 -04:00
Marcelo Correa 2e2c1da4fa fix(cron): skip unparseable job rows instead of aborting the scheduler (#3405)
A single cron job with a malformed `next_run` timestamp in the database
was silently stopping all scheduled jobs. The `due_jobs` query matched
rows whose `next_run` was lexicographically past-due (including
non-RFC3339 values like "2026-03-12 03:11:13" which sort before valid
RFC3339 strings), then `map_cron_job_row` failed to parse the timestamp,
the `row?` propagation caused `due_jobs` to return `Err`, and the
scheduler marked itself as `error` and skipped every subsequent tick —
taking down all other healthy jobs with it.

The fix changes the row iteration in `due_jobs` to log a warning and
skip unparseable rows rather than aborting the entire result set. Valid
jobs continue to fire; the broken row is surfaced in the logs without
collateral damage to the scheduler.
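
A sketch of the changed iteration (RawRow, CronJob, and map_cron_job_row
are simplified stand-ins for the real storage types):

    struct RawRow { next_run: String }
    struct CronJob { next_run: String }

    fn map_cron_job_row(row: &RawRow) -> Result<CronJob, String> {
        // Stand-in for the real RFC3339 parse that failed on
        // values like "2026-03-12 03:11:13".
        if row.next_run.contains('T') {
            Ok(CronJob { next_run: row.next_run.clone() })
        } else {
            Err(format!("invalid RFC3339 timestamp: {}", row.next_run))
        }
    }

    // Before: a single Err aborted the whole result set via `row?`.
    // After: warn and skip, so healthy jobs keep firing.
    fn due_jobs(rows: Vec<RawRow>) -> Vec<CronJob> {
        rows.into_iter()
            .filter_map(|row| match map_cron_job_row(&row) {
                Ok(job) => Some(job),
                Err(e) => {
                    eprintln!("warning: skipping unparseable cron row: {e}");
                    None
                }
            })
            .collect()
    }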

Co-authored-by: ZeroClaw <zeroclaw@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 18:17:08 -04:00
SimianAstronaut7 736347c71b fix(workflows): use RELEASE_TOKEN for beta release tag creation (#3366)
The default GITHUB_TOKEN cannot bypass the "Release Tags - Restricted
Operators" ruleset, causing beta releases to fail with a 422 validation
error. Switch to a PAT stored as RELEASE_TOKEN that has bypass permissions.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 18:09:03 -04:00
Argenis c384c34c31 feat(provider): support custom API path suffix for custom: endpoints (#3447)
* feat(provider): support custom API path suffix for custom: endpoints

Allow users to configure a custom API path for custom/compatible
providers instead of hardcoding /v1/chat/completions. Some self-hosted
LLM servers use different API paths.

Adds an optional `api_path` field to:
- Config (top-level and model_providers profile)
- ProviderRuntimeOptions
- OpenAiCompatibleProvider

When set, the custom path is appended to base_url instead of the
default /chat/completions suffix.
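
A sketch of the resulting URL assembly (the function name and plumbing
are assumptions; only the suffix selection mirrors the commit):

    fn chat_url(base_url: &str, api_path: Option<&str>) -> String {
        // When api_path is set, it replaces the default suffix.
        let suffix = api_path.unwrap_or("/chat/completions");
        format!("{}{}", base_url.trim_end_matches('/'), suffix)
    }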

Closes #3125

* fix: add missing api_path field to test ModelProviderConfig initializers
2026-03-13 17:54:21 -04:00
Argenis 4ca5fa500b feat(web): preserve message draft in agent chat across view switches (#3443)
Add an in-memory DraftContext that persists textarea content when the
AgentChat component unmounts due to route navigation. The draft is
restored when the user returns to the chat view. The store is
session-scoped (not localStorage) so drafts are cleared on page reload.

Closes #3129
2026-03-13 17:40:23 -04:00
Argenis 1a0441a006 feat(web): electric blue dashboard restyle with animations and logo (#3445)
Restyle the entire web dashboard with an electric blue theme featuring
glassmorphism cards, smooth animations, and the ZeroClaw logo. Remove
duplicate Vite dev server infrastructure to ensure a single gateway.

- Add electric blue color palette and glassmorphism styling system
- Add 10+ keyframe animations (fade, slide, pulse-glow, shimmer, float)
- Restyle all 10 pages with glass cards and electric components
- Add ZeroClaw logo to sidebar, pairing screen, and favicon
- Remove Vite dev/preview scripts and proxy config (build-only now)
- Update pairing dialog with ambient glow and animated elements

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 17:27:41 -04:00
Argenis ef770f15b9 feat(tool): on-demand MCP tool loading via tool_search (#3446)
Add deferred MCP tool activation to reduce context window waste.
When mcp.deferred_loading is true (the default), MCP tool schemas
are not eagerly included in the LLM context. Instead, only tool
names appear in an <available-deferred-tools> system prompt section,
and the LLM calls the built-in tool_search tool to fetch full schemas
on demand. Setting deferred_loading to false preserves the existing
eager behavior.

Closes #3095
2026-03-13 17:25:19 -04:00
Argenis 05a0cdf6f4 feat(tools): add Windows support for shell tool_call execution (#3442)
Use `cmd.exe /C` instead of `sh -c` on Windows via cfg(target_os).
Make the shell allowlist, forbidden paths, env vars, risk classification,
and path detection platform-aware so the shell tool works correctly on
Windows without changing Unix behavior.

Closes #3327

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 17:12:16 -04:00
Argenis fc1b555b31 feat(matrix): add read markers and typing notifications (#3441)
Send a read receipt after receiving each message, start a typing
notification while processing, and stop it before sending the response.
This gives Matrix users visual feedback that the bot has seen their
message and is working on a reply.

Closes #3357

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 17:06:48 -04:00
Argenis b8ebe7bcd3 feat(gateway): add cron run history API and dashboard panel (#3440)
Add GET /api/cron/{id}/runs?limit=N endpoint that returns recent stored
runs for a cron job, with server-side limit clamping to 1-100 (default 20).
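
The clamping itself is a one-liner in spirit (sketch):

    fn effective_limit(query_limit: Option<u32>) -> u32 {
        // Missing ?limit= falls back to 20; out-of-range values are clamped.
        query_limit.unwrap_or(20).clamp(1, 100)
    }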

Frontend adds a CronRun type, API client function, and an expandable
run history panel on the Cron page showing status, timestamps, duration,
and output for each run, with loading, empty, error, and refresh states.

Closes #3299

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 17:06:44 -04:00
Argenis 049edf4eec feat(channel): add WeCom (WeChat Enterprise) Bot Webhook channel (#3439)
Implement the Channel trait for WeCom Bot Webhook, supporting
outbound text messages via the WeCom webhook API. The channel
is send-only; inbound messages can be routed through the gateway
webhook subsystem.

Closes #3396
2026-03-13 16:44:34 -04:00
Argenis 856a651dd1 feat(channels): add ack_reactions config to disable channel reactions (#3438)
Users can now set `ack_reactions = false` in `[channels_config]` to
suppress the 👀/⚠️ acknowledgement reactions on incoming messages.
The option defaults to `true`, preserving existing behavior.

Closes #3403
2026-03-13 16:43:47 -04:00
Argenis fd3b17edc1 feat(docker): add Debian-based container variant with shell tools (#3437)
The default distroless image has no shell, preventing the agent from
using shell-based tools (pwd, ls, git, etc.). Add Dockerfile.debian
that uses debian:bookworm-slim as the runtime base and includes bash,
git, curl, and ca-certificates. The existing distroless Dockerfile
remains unchanged for security-conscious deployments.

Closes #3359
2026-03-13 16:40:29 -04:00
Argenis a9c7697c6b fix(slack): subscribe to thread message events in polling mode (#3435)
The polling-based Slack listener only called conversations.history, which
returns top-level channel messages but not thread replies. Users replying
inside a thread were invisible to the bot after its initial response.

Add conversations.replies polling for active threads discovered in
channel history. Track thread parents with reply_count > 0, periodically
fetch new replies, and emit them as ChannelMessage with the correct
thread_ts so the bot can continue multi-turn conversations in-thread.
Stale threads are evicted after 24 hours or when the tracker exceeds
50 entries.

Closes #3084
2026-03-13 16:33:26 -04:00
Argenis 939edf5e86 fix: expose MCP tools to delegate subagents (#3436)
MCP tools were not visible to delegate subagents because parent_tools
was a static snapshot taken before MCP tool wiring. Switch to interior
mutability (parking_lot::RwLock) so MCP wrappers pushed after
DelegateTool construction are visible at sub-agent execution time.
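
A sketch of the shape (the Tool trait and method names are simplified):

    use std::sync::Arc;
    use parking_lot::RwLock;

    trait Tool: Send + Sync {}

    struct DelegateTool {
        // RwLock instead of a frozen snapshot: wrappers pushed after
        // construction are visible to later sub-agent runs.
        parent_tools: Arc<RwLock<Vec<Arc<dyn Tool>>>>,
    }

    impl DelegateTool {
        fn register_tool(&self, tool: Arc<dyn Tool>) {
            self.parent_tools.write().push(tool);
        }
    }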

Closes #3069
2026-03-13 16:26:01 -04:00
Alix-007 bbc82fd4f9 fix(observability): support verbose backend selection (#3374)
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-13 16:15:43 -04:00
Argenis a606f308f5 fix(security): respect allowed_roots in tool-level path pre-checks (#3434)
When workspace_only=true and allowed_roots is configured, several tools
(file_read, content_search, glob_search) rejected absolute paths before
the allowed_roots allowlist was consulted. Additionally, tilde paths
(~/...) passed is_path_allowed but were then incorrectly joined with
workspace_dir as literal relative paths.

Changes:
- Add SecurityPolicy::resolve_tool_path() to properly expand tilde
  paths and handle absolute vs relative path resolution for tools
- Add SecurityPolicy::is_under_allowed_root() for tool pre-checks to
  consult the allowed_roots allowlist before rejecting absolute paths
- Update file_read to use resolve_tool_path instead of workspace_dir.join
- Update content_search and glob_search absolute-path pre-checks to
  allow paths under allowed_roots
- Add tests covering workspace_only + allowed_roots scenarios

Closes #3082
2026-03-13 16:15:30 -04:00
Argenis 1162e3adc2 fix(channel): resolve Matrix channel compilation errors with channel-matrix feature (#3433)
Add missing Relation import from ruma events::room::message, remove
unused InReplyTo import, suppress unused matrix_skip_context variable,
and fix additional clippy lints (split_once, single-char patterns,
collapsible replace, wildcard match, ignored unit pattern).

Closes #3425
2026-03-13 15:45:28 -04:00
Alix-007 bd75799644 fix(build): unblock strict 32-bit no-default-features builds (#3375)
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-13 15:45:03 -04:00
Argenis 35217bf457 fix: use cfg-conditional AtomicU32 fallback for 32-bit targets in mcp_client (#3432)
PR #3409 fixed AtomicU64 usage on 32-bit targets in other files but
missed src/tools/mcp_client.rs. Apply the same cfg(target_has_atomic)
pattern used in channels/irc.rs to conditionally select AtomicU64 vs
AtomicU32.
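
The pattern, roughly (the alias name is illustrative):

    #[cfg(target_has_atomic = "64")]
    type RequestCounter = std::sync::atomic::AtomicU64;
    #[cfg(not(target_has_atomic = "64"))]
    type RequestCounter = std::sync::atomic::AtomicU32;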

Closes #3430
2026-03-13 15:33:31 -04:00
Alix-007 e5e3761020 fix(cron): support Matrix announce delivery (#3373)
* fix(cron): support Matrix announce delivery

* fix(cron): expose Matrix delivery in tool schemas
2026-03-13 15:16:10 -04:00
Vernon Stinebaker a52446c637 feat(agent): add tool_filter_groups for per-turn MCP tool schema filtering (#3395)
* feat(agent): add tool_filter_groups for per-turn MCP tool schema filtering

Introduces per-turn MCP tool schema filtering to reduce token overhead when
many MCP tools are registered. Filtering is driven by a new config field
`agent.tool_filter_groups`, which is a list of named groups that each
specify tool glob patterns and an activation mode (`always` or `dynamic`).

Built-in (non-MCP) tools always pass through unchanged; the feature is fully
backward-compatible — an empty `tool_filter_groups` list (the default) leaves
all existing behaviour untouched.

Changes:
- src/config/schema.rs: add `ToolFilterGroupMode`, `ToolFilterGroup` types
  and `tool_filter_groups` field on `AgentConfig`
- src/config/mod.rs: re-export `ToolFilterGroup`, `ToolFilterGroupMode`
- src/agent/loop_.rs: add `glob_match()`, `filter_tool_specs_for_turn()`,
  `compute_excluded_mcp_tools()` helpers; wire call sites in both single-shot
  and interactive REPL modes; add unit tests for all three functions
- docs/reference/api/config-reference.md: document `tool_filter_groups`
  field and sub-table schema with example
- docs/i18n/el/config-reference.md: add Greek locale config-reference with
  `tool_filter_groups` section (2026-03-12 update)

* Remove accidentally committed worktree directories

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-13 14:23:57 -04:00
Vernon Stinebaker 292952e563 feat(tools/mcp): add MCP subsystem tools layer with multi-transport client (#3394)
* feat(tools/mcp): add MCP subsystem tools layer with multi-transport client

Introduces a new MCP (Model Context Protocol) subsystem to the tools layer,
providing a multi-transport client implementation (stdio, HTTP, SSE) that
allows ZeroClaw agents to connect to external MCP servers and register their
exposed tools into the runtime tool registry.

New files:
- src/tools/mcp_client.rs: McpRegistry — lifecycle manager for MCP server connections
- src/tools/mcp_protocol.rs: protocol types (request/response/notifications)
- src/tools/mcp_tool.rs: McpToolWrapper — bridges MCP tools to ZeroClaw Tool trait
- src/tools/mcp_transport.rs: transport abstraction (Stdio, Http, Sse)

Wiring changes:
- src/tools/mod.rs: pub mod + pub use for new MCP modules
- src/config/schema.rs: McpTransport, McpServerConfig, McpConfig types; mcp field
  on Config; validate_mcp_config; mcp unit tests
- src/config/mod.rs: re-exports McpConfig, McpServerConfig, McpTransport
- src/channels/mod.rs: MCP server init block in start_channels()
- src/agent/loop_.rs: MCP registry init in run() and process_message()
- src/onboard/wizard.rs: mcp: McpConfig::default() in both wizard constructors

* fix(tools/mcp): inject MCP tools after built-in tool filter, not before

MCP servers are user-declared external integrations. The built-in
agent.allowed_tools / agent.denied_tools filter (filter_primary_agent_tools_or_fail)
governs built-in tool governance only. Injecting MCP tools before that
filter would silently drop all MCP tools when a restrictive allowlist is
configured.

Add ordering comments at both call sites (run() CLI path and
process_message() path) to make this contract explicit for reviewers
and future merges.

Identified via: shady831213/zeroclaw-agent-mcp@3f90b78

* fix(tools/mcp): strip approved field from MCP tool args before forwarding

ZeroClaw's security model injects `approved: bool` into built-in tool
args for supervised-mode confirmation. MCP servers have no knowledge of
this field and reject calls that include it as an unexpected parameter.

Strip `approved` from object-typed args in McpToolWrapper::execute()
before forwarding to the MCP server. Non-object args pass through
unchanged (no silent conversion or rejection).
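
The stripping step is small (sketch using serde_json):

    use serde_json::Value;

    fn strip_approved(mut args: Value) -> Value {
        // Only object-shaped args carry the injected field.
        if let Value::Object(map) = &mut args {
            map.remove("approved");
        }
        args
    }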

Add two unit tests:
- execute_strips_approved_field_from_object_args: verifies removal
- execute_handles_non_object_args_without_panic: verifies non-object
  shapes are not broken by the stripping logic

Identified via: shady831213/zeroclaw-agent-mcp@c68be01

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
2026-03-13 14:23:48 -04:00
SimianAstronaut7 d115b28a1f fix(daemon): expand tilde to home directory in file paths (#3424)
Rust treats `~` as a literal path character, not a home directory
shorthand. Several config resolution paths used `PathBuf::from()` on
user-provided strings without expanding `~` first, causing a literal
`~` folder to be created in the working directory.

Apply `shellexpand::tilde()` to all user-facing path inputs:
- ZEROCLAW_CONFIG_DIR env var (config/schema.rs, onboard/wizard.rs)
- ZEROCLAW_WORKSPACE env var (config/schema.rs, onboard/wizard.rs,
  channels/matrix.rs)
- active_workspace.toml marker file config_dir (config/schema.rs)

The WhatsApp Web session_path was already correctly expanded via
shellexpand::tilde() in whatsapp_web.rs.
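
The pattern being applied, sketched:

    use std::path::PathBuf;

    // shellexpand::tilde returns a Cow<str>; expand before PathBuf::from
    // so "~/x" becomes "/home/user/x" instead of a literal "~" directory.
    fn expand_user_path(raw: &str) -> PathBuf {
        PathBuf::from(shellexpand::tilde(raw).as_ref())
    }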

Closes #3417

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 14:22:57 -04:00
Jacobinwwey b563f5954e fix(llamacpp): send responses fallback history in llama.cpp-compatible item shape (#3391)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 14:21:17 -04:00
SimianAstronaut7 e3e711073a feat(providers): support custom HTTP headers for LLM API requests (#3423)
Add `extra_headers` config field and `ZEROCLAW_EXTRA_HEADERS` env var
support so users can specify custom HTTP headers for provider API
requests. This enables connecting to providers that require specific
headers (e.g., User-Agent, HTTP-Referer, X-Title) without a reverse
proxy.

Config file headers serve as the base; env var headers override them.
Format: `Key:Value,Key2:Value2`
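
A sketch of parsing that format (handling of malformed pairs is assumed
to differ in the real code):

    fn parse_extra_headers(raw: &str) -> Vec<(String, String)> {
        raw.split(',')
            .filter_map(|pair| pair.split_once(':'))
            .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
            .collect()
    }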

Closes #3189

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-13 14:15:42 -04:00
SimianAstronaut7 1033287f38 fix(gateway): skip pairing dialog when require_pairing is disabled (#3422)
When `require_pairing = false` in config, the dashboard showed the
pairing dialog even though no pairing code exists, creating a deadlock.

Add `requiresPairing` field to AuthState (defaults to `true` as a safe
fallback) and update it from the `/health` endpoint response. Gate the
pairing dialog in App.tsx on both `!isAuthenticated` and
`requiresPairing` so the dashboard loads directly when pairing is
disabled.

Closes #3267

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 14:08:59 -04:00
Argenis 6a4ccaeb73 fix: strip stale tool_result from conversation history and memory context (#3418)
Prevent orphan `<tool_result>` blocks from leaking into LLM sessions:

- Strip `<tool_result>` blocks from cached prior turns in
  `process_channel_message` so the LLM never sees a tool result
  without a preceding tool call (Case A — in-memory accumulation).
- Skip memory entries containing `<tool_result` in both
  `should_skip_memory_context_entry` (channel path) and
  `build_context` (agent path) so SQLite-recalled tool output
  is never injected as memory context (Case B — post-restart).

Closes #3402
2026-03-13 09:55:57 -04:00
Argenis 4aead04916 fix: skip documentation URLs in cloudflare tunnel URL parser (#3416)
The URL parser captured the first https:// URL found in cloudflared
stderr output. When cloudflared emits a quic-go UDP buffer warning
containing a github.com link, that documentation URL was incorrectly
captured as the tunnel's public URL.

Extract URL parsing into a testable helper function that skips known
documentation domains (github.com, cloudflare.com/docs,
developers.cloudflare.com) and recognises tunnel-specific log prefixes
("Visit it at", "Route at", "Registered tunnel connection") and the
.trycloudflare.com domain.
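
A simplified sketch of the domain filter (the real helper also inspects
the log-line prefixes listed above):

    const DOC_DOMAINS: &[&str] = &[
        "github.com",
        "cloudflare.com/docs",
        "developers.cloudflare.com",
    ];

    fn candidate_tunnel_url(url: &str) -> bool {
        let is_docs = DOC_DOMAINS.iter().any(|d| url.contains(d));
        !is_docs && url.contains(".trycloudflare.com")
    }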

Closes #3413
2026-03-13 09:40:02 -04:00
Argenis 7b23c8934c docs: clarify master-only branch policy and clean up stale references (#3415)
Add a prominent migration notice to CONTRIBUTING.md with explicit
instructions for contributors who still have local or forked main
branches. Fix the last remaining main branch reference in
python/pyproject.toml. Stale merged branches and main-related remote
branches have been deleted.

Refs: #2929, #3061
2026-03-13 09:38:11 -04:00
Argenis 5d1543100d fix: support Linq 2026-02-03 webhook payload shape (#3337) (#3410)
Closes #3337

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 09:31:53 -04:00
Argenis e3a91bc805 fix: gate prometheus and fix AtomicU64 for 32-bit targets (#3409)
* fix: gate prometheus and fix AtomicU64 for 32-bit targets (#3335)

Closes #3335

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* style: fix import ordering for cfg-gated atomics

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 09:31:25 -04:00
Argenis 833fdefbe5 fix: restore MCP support missing from master branch (#3412)
MCP (Model Context Protocol) config and tool modules were added on the
old `main` branch but never made it to `master`. This restores the full
MCP subsystem: config schema, transport layer (stdio/HTTP/SSE), client
registry, tool wrapper, config validation, and channel wiring.

Closes #3379

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 09:20:37 -04:00
Argenis 13f74f0ecc fix: restore web dashboard by auto-building frontend in build.rs (#3408)
The build.rs was reduced to only creating an empty web/dist/ directory,
which meant rust-embed would embed no files and the SPA fallback handler
would return 404 for every request including `/`. This is a regression
from v0.1.8 where web/dist/ was still tracked in git.

Update build.rs to detect when web/dist/index.html is missing and
automatically run `npm ci && npm run build` if npm is available. The
build is best-effort: when Node.js is absent the Rust build still
succeeds with an empty dist directory (release workflows pre-build the
frontend separately).

Closes #3386

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 09:20:34 -04:00
Argenis 9ff045d2e9 fix: resolve install.sh prebuilt download 404 by querying releases API (#3406)
The /releases/latest/download/ URL only resolves to the latest non-prerelease,
non-draft release. When that release has no binary assets (e.g. v0.1.9a),
--prebuilt-only fails with a 404. This adds resolve_asset_url() which queries
the GitHub releases API for the newest release (including prereleases) that
actually contains the requested asset, falling back to /releases/latest/ if
the API call fails.

Closes #3389

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 09:20:31 -04:00
Argenis 6fe8e3a5bb fix: gracefully handle reasoning_enabled for unsupported Ollama models (#3411)
When reasoning_enabled is configured, the Ollama provider sends
think=true to all models. Models that don't support the think parameter
(e.g. qwen3.5:0.8b) cause request failures that the reliable provider
classifies as retryable, leading to an infinite retry loop.

Fix: when a request with think=true fails, automatically retry once
with think omitted. This lets the call succeed on models that lack
reasoning support while preserving thinking for capable models.
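
The fallback in miniature (`send` stands in for the real Ollama request
path, parameterised by whether think is set):

    fn send_with_think_fallback<T, E>(
        send: impl Fn(bool) -> Result<T, E>,
    ) -> Result<T, E> {
        // Try with think enabled; on failure retry once without it.
        send(true).or_else(|_| send(false))
    }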

Closes #3183
Related #850

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 09:16:03 -04:00
Argenis 5dc1750df7 fix: add crypto.randomUUID fallback for older browsers (#3407)
Replace direct `crypto.randomUUID()` calls in the web dashboard with a
`generateUUID()` utility that falls back to a manual UUID v4 implementation
using `crypto.getRandomValues()` when `randomUUID` is unavailable (older
Safari, some Electron builds, Raspberry Pi browsers).

Closes #3303
Closes #3261

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 09:16:00 -04:00
SimianAstronaut7 b40c9e77af Merge pull request #3365 from zeroclaw-labs/ci/fix-glibc-cache-mismatch
ci: pin release workflows to ubuntu-latest to fix glibc cache mismatch
2026-03-12 17:44:42 -04:00
simianastronaut 34cac3d9dd ci: pin release workflows to ubuntu-latest to fix glibc cache mismatch
CI workflows use ubuntu-latest (24.04, glibc 2.39) while release
workflows used ubuntu-22.04 (glibc 2.35). Swatinem/rust-cache keys
on runner.os ("Linux"), not the specific version, so cached build
scripts compiled on 24.04 would fail on 22.04 with GLIBC_2.39 not
found errors.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 17:37:31 -04:00
SimianAstronaut7 badf96dcab Merge pull request #3363 from zeroclaw-labs/ci/faster-apple-build
ci: use thin LTO profile for faster CI builds
2026-03-12 17:31:29 -04:00
simianastronaut c1e1228fb0 ci: use thin LTO profile for faster CI builds
The release profile uses fat LTO + codegen-units=1, which is
optimal for distribution binaries but unnecessarily slow for CI
validation builds. Add a dedicated `ci` profile with thin LTO and
codegen-units=16, and use it in both CI workflows.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 17:18:36 -04:00
SimianAstronaut7 d2b923ae07 Merge pull request #3322 from zeroclaw-labs/work-issues/2984-fix-cli-chinese-input-crash
fix(agent): use byte-level stdin reads to prevent CJK input crash
2026-03-12 16:58:46 +00:00
SimianAstronaut7 21fdef95f4 Merge pull request #3324 from zeroclaw-labs/work-issues/2907-channel-send-message
feat(channel): add `channel send` CLI command for outbound messages
2026-03-12 16:58:44 +00:00
SimianAstronaut7 d02fbf2d76 Merge pull request #3326 from zeroclaw-labs/work-issues/2978-tool-call-dedup-exempt
feat(agent): add tool_call_dedup_exempt config to bypass within-turn dedup
2026-03-12 16:58:41 +00:00
SimianAstronaut7 05cede29a8 Merge pull request #3328 from zeroclaw-labs/fix/2926-configurable-provider-timeout
feat(provider): make HTTP request timeout configurable
2026-03-12 16:58:38 +00:00
SimianAstronaut7 d34a2e6d3f Merge pull request #3329 from zeroclaw-labs/work-issues/2443-approval-manager-shadowed-binding
fix(channel): remove shadowed variable bindings in test functions
2026-03-12 16:58:35 +00:00
SimianAstronaut7 576d22fedd Merge pull request #3330 from zeroclaw-labs/work-issues/2884-ws-token-query-param
fix(gateway): restore multi-source WebSocket auth token extraction
2026-03-12 16:58:32 +00:00
SimianAstronaut7 d5455c694c Merge pull request #3332 from zeroclaw-labs/work-issues/2896-discord-ws-silent-stop
fix(channel): handle websocket Ping frames and read errors in Discord gateway
2026-03-12 16:58:30 +00:00
SimianAstronaut7 90275b057e Merge pull request #3339 from zeroclaw-labs/work-issues/2880-fix-workspace-path-blocked
fix(security): allow absolute paths within workspace when workspace_only is set
2026-03-12 16:58:27 +00:00
SimianAstronaut7 d46b4f29d2 Merge pull request #3341 from zeroclaw-labs/work-issues/2403-telegram-photo-duplicate
fix(channel): prevent first-turn photo duplication in memory context
2026-03-12 16:58:23 +00:00
simianastronaut f25835f98c fix(channel): prevent first-turn photo duplication in memory context (#2403)
When auto_save is enabled and a photo is sent on the first turn of a
Telegram session, the [IMAGE:] marker was duplicated because:

1. auto_save stores the photo message (including the marker) to memory
2. build_memory_context recalls the just-saved entry as relevant context
3. The recalled marker is prepended to the original message content

Fix: skip memory context entries containing [IMAGE:] markers in
should_skip_memory_context_entry so auto-saved photo messages are not
re-injected through memory context enrichment.

Closes #2403

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 12:03:29 -04:00
simianastronaut 376579f9fa fix(security): allow absolute paths within workspace when workspace_only is set (#2880)
When workspace_only=true, is_path_allowed() blanket-rejected all
absolute paths.  This blocked legitimate tool calls that referenced
files inside the workspace using an absolute path (e.g. saving a
screenshot to /home/user/.zeroclaw/workspace/images/example.png).

The fix checks whether an absolute path falls within workspace_dir or
any configured allowed_root before rejecting it, mirroring the priority
order already used by is_resolved_path_allowed().  Paths outside the
workspace and allowed roots are still blocked, and the forbidden-paths
list continues to apply to all other absolute paths.
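
The core check, sketched:

    use std::path::{Path, PathBuf};

    fn absolute_path_allowed(path: &Path, workspace: &Path, allowed_roots: &[PathBuf]) -> bool {
        // Workspace first, then allowed roots; everything else stays blocked.
        path.starts_with(workspace)
            || allowed_roots.iter().any(|root| path.starts_with(root))
    }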

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 11:59:35 -04:00
simianastronaut b620fd6bba fix(channel): handle websocket Ping frames and read errors in Discord gateway
The Discord gateway event loop silently dropped websocket Ping frames
(via a catch-all `_ => continue`) without responding with Pong. After
splitting the websocket stream into read/write halves, automatic
Ping/Pong handling is disabled, so the server-side (Cloudflare/Discord)
eventually considers the client unresponsive and stops sending events.

Additionally, websocket read errors (`Some(Err(_))`) were silently
swallowed by the same catch-all, preventing reconnection on transient
failures.

This patch:
- Responds to `Message::Ping` with `Message::Pong` to maintain the
  websocket keepalive contract
- Breaks the event loop on `Some(Err(_))` with a warning log, allowing
  the supervisor to reconnect
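
Sketch of the new event-loop arms (read/write are the split websocket
halves; surrounding types are simplified):

    while let Some(frame) = read.next().await {
        match frame {
            Ok(Message::Ping(payload)) => {
                // Keepalive contract: answer every Ping with a Pong.
                write.send(Message::Pong(payload)).await?;
            }
            Ok(other) => handle_gateway_event(other),
            Err(e) => {
                tracing::warn!("discord ws read error: {e}; reconnecting");
                break; // supervisor reconnects
            }
        }
    }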

Closes #2896

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 11:15:53 -04:00
simianastronaut 98d6c5af9e fix(gateway): restore multi-source WebSocket auth token extraction (#2884)
The Electric Blue dashboard (PR #2804) sends the pairing token as a
?token= query parameter, but the WS handler only checked that single
source. Earlier PR #2193 had established a three-source precedence
chain (header > subprotocol > query param) which was lost.

Add extract_ws_token() with the documented precedence:
  1. Authorization: Bearer <token> header
  2. Sec-WebSocket-Protocol: bearer.<token> subprotocol
  3. ?token=<token> query parameter

This ensures browser-based clients (which cannot set custom headers)
can authenticate via query param or subprotocol, while non-browser
clients can use the standard Authorization header.
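
A sketch of the precedence chain over already-extracted raw values:

    fn extract_ws_token(
        auth_header: Option<&str>, // "Bearer <token>"
        ws_protocol: Option<&str>, // "bearer.<token>"
        query_token: Option<&str>, // ?token=<token>
    ) -> Option<String> {
        // Empty values fall through to the next source in precedence.
        let non_empty = |t: &&str| !t.is_empty();
        auth_header
            .and_then(|h| h.strip_prefix("Bearer "))
            .filter(non_empty)
            .or_else(|| ws_protocol.and_then(|p| p.strip_prefix("bearer.")).filter(non_empty))
            .or_else(|| query_token.filter(non_empty))
            .map(str::to_string)
    }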

Includes 9 unit tests covering each source, precedence ordering,
and empty-value fallthrough.

Closes #2884

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 11:01:26 -04:00
simianastronaut c51ca19dc1 fix(channel): remove shadowed variable bindings in test functions (#2443)
Rename shadowed `histories` and `store` bindings in three test functions
to eliminate variable shadows that are flagged under stricter lint
configurations (clippy::shadow_unrelated). The initial bindings are
consumed by struct initialization; the second bindings that lock the
mutex guard are now named distinctly (`locked_histories`, `cleanup_store`).

Closes #2443

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 11:00:31 -04:00
simianastronaut ea6abc9f42 feat(provider): make HTTP request timeout configurable (#2926)
The provider HTTP request timeout was hardcoded at 120 seconds in
`OpenAiCompatibleProvider::http_client()`. This makes it configurable
via the `provider_timeout_secs` config key and the
`ZEROCLAW_PROVIDER_TIMEOUT_SECS` environment variable, defaulting
to 120s for backward compatibility.
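
The client construction, sketched with reqwest's builder:

    use std::time::Duration;

    fn http_client(timeout_secs: u64) -> reqwest::Result<reqwest::Client> {
        reqwest::Client::builder()
            .timeout(Duration::from_secs(timeout_secs)) // previously a hardcoded 120
            .build()
    }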

Changes:
- Add `provider_timeout_secs` field to Config with serde default
- Add `ZEROCLAW_PROVIDER_TIMEOUT_SECS` env var override
- Add `timeout_secs` field and `with_timeout_secs()` builder on
  `OpenAiCompatibleProvider`
- Add `provider_timeout_secs` to `ProviderRuntimeOptions`
- Thread config value through agent loop, channels, gateway, and tools
- Use `compat()` closure in provider factory to apply timeout to all
  compatible providers

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 10:40:18 -04:00
simianastronaut e2f6f20bfb feat(agent): add tool_call_dedup_exempt config to bypass within-turn dedup (#2978)
Add `agent.tool_call_dedup_exempt` config key (list of tool names) to
allow specific tools to bypass the within-turn identical-signature
deduplication check in run_tool_call_loop. This fixes the browser
snapshot polling use case where repeated calls with identical arguments
are legitimate and should not be suppressed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 10:28:42 -04:00
simianastronaut 88df3d4b2e feat(channel): add channel send CLI command for outbound messages (#2907)
Add a `zeroclaw channel send` subcommand that sends a one-off message
through a configured channel without starting the full agent loop.
This enables hardware sensor triggers (e.g., range sensors on
Raspberry Pi) to push notifications to Telegram and other platforms.

Usage:
  zeroclaw channel send 'Alert!' --channel-id telegram --recipient <chat_id>

Supported channels: telegram, discord, slack.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 10:18:04 -04:00
simianastronaut 0dba55959d fix(agent): use byte-level stdin reads to prevent CJK input crash
When running through a PTY chain (kubectl exec, SSH, remote terminals),
the transport layer may split data frames at space (0x20) boundaries,
interrupting multi-byte UTF-8 characters mid-sequence. Rust's
BufRead::read_line requires valid UTF-8 and returns InvalidData
immediately, crashing the interactive agent loop.

Replace stdin().read_line() with byte-level read_until(b'\n') followed
by String::from_utf8_lossy() in both the main input loop and the
/clear confirmation prompt. This reads raw bytes without UTF-8
validation during transport, then does lossy conversion (replacing any
truly invalid bytes with U+FFFD instead of crashing).
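
The replacement pattern, roughly:

    use std::io::{self, BufRead};

    fn read_line_lossy() -> io::Result<String> {
        let mut buf = Vec::new();
        // Raw bytes: no UTF-8 validation while the transport may still
        // be splitting frames mid-character.
        io::stdin().lock().read_until(b'\n', &mut buf)?;
        // Lossy conversion afterwards: invalid bytes become U+FFFD.
        Ok(String::from_utf8_lossy(&buf).into_owned())
    }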

Also set ENV LANG=C.UTF-8 in both Dockerfile stages as defense-in-depth
to ensure the container locale defaults to UTF-8.

Closes #2984

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 10:08:49 -04:00
SimianAstronaut7 893788f04d fix(security): strip URLs before high-entropy token extraction (#3064) (#3321)
The credential leak detector's check_high_entropy_tokens would
false-positive on URL path segments (e.g. long alphanumeric filenames)
because extract_candidate_tokens included '/' in the token character
set, creating long mixed-alpha-digit tokens that exceeded the Shannon
entropy threshold.

Fix: strip URLs from content before extracting candidate tokens for
entropy analysis. Structural pattern checks (API keys, JWTs, AWS
credentials) use dedicated regexes and are unaffected.

Closes #3064

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:53:38 +00:00
SimianAstronaut7 0fea62d114 fix(tool): resolve Brave API key lazily with decryption support (#3078) (#3320)
WebSearchTool previously stored the Brave API key once at boot and never
re-read it. This caused three failures: (1) keys set after boot via
web_search_config were ignored, (2) encrypted keys passed as raw enc2:
blobs to the Brave API, and (3) keys absent at startup left the tool
permanently broken.

The fix adds lazy key resolution at execution time. A fast path returns
the boot-time key when it is plaintext and non-empty. When the boot key
is missing or still encrypted, the tool re-reads config.toml, decrypts
the value through SecretStore, and uses the result. This also means
runtime config updates (e.g. `web_search_config set brave_api_key=...`)
are picked up on the next search invocation.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:53:35 +00:00
SimianAstronaut7 cca3cf8f84 fix(agent): use char-boundary-safe slicing in scrub_credentials (#3024) (#3319)
Replace byte-level `&val[..4]` slice with `char_indices().nth(4)` to
prevent a panic when the captured credential value contains multi-byte
UTF-8 characters (e.g. Chinese text).  Adds a regression test with
CJK input.
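
The boundary-safe slice, sketched:

    fn redacted_prefix(val: &str) -> &str {
        match val.char_indices().nth(4) {
            // idx is the byte offset of the fifth character, so the
            // slice always ends on a char boundary.
            Some((idx, _)) => &val[..idx],
            None => val, // fewer than five characters
        }
    }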

Closes #3024

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:53:32 +00:00
SimianAstronaut7 7170810e98 fix(channel): drop MutexGuard before .await in WhatsApp Web listen (#3315)
Extract `self.bot_handle.lock().take()` into a separate `let` binding
so the parking_lot::MutexGuard is dropped before the `.await`, making
the listen future Send again.
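
The shape of the fix (types simplified; the point is the guard's lifetime):

    use std::sync::Arc;
    use parking_lot::Mutex;
    use tokio::task::JoinHandle;

    struct WhatsAppWeb {
        bot_handle: Arc<Mutex<Option<JoinHandle<()>>>>,
    }

    impl WhatsAppWeb {
        async fn stop(&self) {
            // The guard is dropped at the end of this statement,
            // before any .await, keeping the future Send.
            let handle = self.bot_handle.lock().take();
            if let Some(handle) = handle {
                let _ = handle.await;
            }
        }
    }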

Closes #3312

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:16:40 +00:00
SimianAstronaut7 816fa87d60 fix(channel): restore Lark/Feishu channel compilation (#3318)
Replace the `use_feishu: bool` field on `LarkChannel` with a
`platform: LarkPlatform` enum field, add `mention_only` to the
`new_with_platform` constructor, and introduce `from_lark_config` /
`from_feishu_config` factory methods so the channel factory in
`mod.rs` and the existing tests compile.

Resolves #3302

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:16:31 +00:00
Argenis dcffa4d7fb fix(ci): add missing ci-run.yml workflow (#3268)
The ci-run.yml workflow was referenced in docs/contributing/ci-map.md
and branch protection rules but never existed in the repository,
causing push-triggered CI runs to fail immediately with zero jobs
and no logs.

This adds the workflow with all documented jobs: lint, strict delta
lint, test, build (linux + macOS), docs quality, and the CI Required
Gate composite check. Triggers on both push and pull_request to master.

Fixes #2853

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 12:48:16 +00:00
Argenis e03dc4bfce fix(security): unify cron shell validation across API/CLI/scheduler (#3270)
Centralize cron shell command validation so all entrypoints enforce the
same security policy (allowlist + risk gate + approval) before
persistence and execution.

Changes:
- Add validate_shell_command() and validate_shell_command_with_security()
  as the single validation gate for all cron shell paths
- Add add_shell_job_with_approval() and update_shell_job_with_approval()
  that validate before persisting
- Add add_once_validated() and add_once_at_validated() for one-shot jobs
- Make raw add_shell_job/add_job/add_once/add_once_at pub(crate) to
  prevent unvalidated writes from outside the cron module
- Route gateway API through validated creation path
- Route schedule tool through validated helpers (single validation)
- Route cron_add/cron_update tools through validated helpers
- Unify scheduler execution validation via validate_shell_command_with_security
- CLI update handler uses full validate_command_execution instead of
  just is_command_allowed
- Add focused tests for validation parity across entrypoints
- Standardize error format to "blocked by security policy: {reason}"

Closes #2741
Closes #2742

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 12:48:13 +00:00
Darren.Zeng 2cb57e6b2d fix(cli): honor config default_temperature in agent command (#3106)
* fix(cli): honor config default_temperature in agent command

Fixes #3033

The agent command was using a hardcoded default_value of 0.7 for the
--temperature parameter, which ignored the default_temperature setting
in the config file.

Changes:
- Changed temperature from f64 to Option<f64>
- Removed hardcoded default_value
- Use config.default_temperature when --temperature is not provided

Users can now set default_temperature in config.toml and have it
honored when running 'zeroclaw agent' without --temperature.

Risk: low (behavior change: now honors config instead of hardcoded value)
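
The resolution order in miniature:

    fn effective_temperature(cli: Option<f64>, config_default: Option<f64>) -> f64 {
        // CLI flag wins, then config.toml, then the old built-in default.
        cli.or(config_default).unwrap_or(0.7)
    }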

* fix: resolve moved value error for config in agent command

Extract final_temperature before passing config to agent::run() to
avoid use-of-moved-value error (config is moved as the first argument
while config.default_temperature was being accessed in the same call).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 00:34:32 -04:00
Alix-007 079e972c81 fix(release): use master in cut_release_tag helper (#3249)
Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 00:20:36 -04:00
Argenis 448682c440 feat(providers): add Azure OpenAI provider support (#3246)
Closes #3176
2026-03-12 00:06:11 -04:00
Argenis 3fe3fe23b1 feat(build): add 32-bit system support via feature gates (#3245)
Closes #3174
2026-03-12 00:06:08 -04:00
Alix-007 9e9052634d fix(ci): use master merge-base in collect_changed_links (#3250)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:45:36 -04:00
Alix-007 b227b1abc3 fix(docs): correct default branch in triage snapshot (#3253)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:45:34 -04:00
Alix-007 fb9501afd5 fix(install): guard empty docker namespace args on Bash 3.2 (#3248)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:43:21 -04:00
Alix-007 9df0a76f4a fix(ci): use master merge-base in docs_quality_gate (#3251)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:43:19 -04:00
Alix-007 9501555448 fix(ci): use master merge-base in rust_strict_delta_gate (#3252)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:43:16 -04:00
Alix-007 138ada0fd6 fix(release): lower GNU Linux build runner baseline (#3257)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:43:14 -04:00
Darren.Zeng 3c2c5aa78c feat(web): add auto-expanding multiline chat composer (#3185)
* feat(web): add auto-expanding multiline chat composer

Convert chat input from single-line input to auto-expanding textarea:

- Replace input with textarea for multiline support
- Add auto-resize logic that grows up to max height (200px)
- Enter sends message, Shift+Enter adds new line
- Reset height after sending message
- Add overflow-y-auto for scrolling when max height reached
- Update placeholder to indicate Shift+Enter for new line

Fixes #3119

* fix: run cargo fmt to fix lint CI

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: trigger CI re-run

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 23:35:38 -04:00
ImanHashemi 273bd00d08 feat(hooks): add webhook-audit builtin hook (#3212)
Co-authored-by: ImanHashemi
2026-03-11 23:34:17 -04:00
Darren.Zeng 546873e2bb feat(cargo): add channel-feishu feature alias for channel-lark (#3105)
Fixes #3012

Adds a feature alias `channel-feishu` that maps to `channel-lark`.
This makes it easier for Feishu users to discover and enable the feature,
as Lark and Feishu are the same platform with different branding.

Users can now use either:
  cargo install zeroclaw --features channel-lark
  cargo install zeroclaw --features channel-feishu

Risk: low (feature alias only, no functional change)

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:28:03 -04:00
Darren.Zeng bf7f568eef docs(contributing): update contributing guide (#3109)
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:28:01 -04:00
Darren.Zeng d33b73974b feat(provider): add vision support for Anthropic provider (#3177)
Add vision support to Anthropic provider to enable image understanding:

- Add ImageSource struct for Anthropic's image content block format
- Add Image variant to NativeContentOut enum
- Implement capabilities() returning vision: true
- Update convert_messages() to parse [IMAGE:...] markers and convert
them to Anthropic's native image content blocks
- Support both data URIs and local file paths
- Add comprehensive tests for vision functionality

Fixes #3163

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:27:58 -04:00
Darren.Zeng a99ec631aa feat(web): add per-message copy button on hover (#3178)
* feat(web): add per-message copy button on hover

Add a copy affordance to each chat message that appears on hover/focus:

- Add MessageCopyButton component with clipboard integration
- Button appears on hover/focus for both user and agent messages
- Shows checkmark icon briefly after successful copy
- Graceful fallback for non-secure contexts
- Keyboard accessible with focus state
- Does not disrupt message layout or text selection

Fixes #3120

* fix: run cargo fmt to fix lint CI

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: trigger CI re-run

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: argenis de la rosa <theonlyhennygod@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 23:27:56 -04:00
Darren.Zeng ab6846cb9f feat(channel): make email subject configurable (#3190)
Add default_subject field to EmailConfig to allow users to customize
the default subject line for outgoing emails. Previously hardcoded as
"ZeroClaw Message".

- Add default_subject field with serde default
- Update send() method to use configured default
- Add tests for new functionality

Fixes #2878

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:27:52 -04:00
Darren.Zeng ca683816a7 fix(config): fix web_fetch allowed_domains serde default (#3192)
When [web_fetch] section was specified in config without explicit
allowed_domains, serde used Vec::default() (empty vector) instead of
the wildcard ["*"] default. This caused all web fetch requests to be
rejected unexpectedly.

Fix by adding explicit serde default function that returns vec!["*"].
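
A minimal sketch of the fix (struct and field names assumed from the commit message):

```rust
use serde::Deserialize;

fn default_allowed_domains() -> Vec<String> {
    vec!["*".to_string()] // wildcard: allow everything unless configured
}

#[derive(Deserialize)]
pub struct WebFetchConfig {
    // Without `default = ...`, a [web_fetch] section that omits this key
    // deserializes to Vec::default() (empty) and rejects every request.
    #[serde(default = "default_allowed_domains")]
    pub allowed_domains: Vec<String>,
}
```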

Fixes #2941

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 23:27:49 -04:00
Argenis 59014641bf fix(web): update rollup to patch path traversal vulnerability (#3258)
Updates rollup to 4.59.0 to fix CVE-2026-27606, an arbitrary file write
via path traversal vulnerability (GHSA-mw96-cpmx-2vgc).
Resolves Dependabot alert #2.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 23:24:04 -04:00
Darren.Zeng 8d7abb73e7 fix(config): accept openai-* aliases for wire_api config (#3191)
Add support for openai-responses, open-ai-responses, openai-chat-completions,
and open-ai-chat-completions as aliases for wire_api configuration values.
This aligns the parser with documented values that users expect to work.
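
A sketch of the alias normalization (enum name and the two canonical values assumed):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum WireApi {
    Responses,
    ChatCompletions,
}

fn parse_wire_api(value: &str) -> Option<WireApi> {
    match value.to_ascii_lowercase().as_str() {
        "responses" | "openai-responses" | "open-ai-responses" => Some(WireApi::Responses),
        "chat-completions" | "openai-chat-completions" | "open-ai-chat-completions" => {
            Some(WireApi::ChatCompletions)
        }
        _ => None,
    }
}
```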

Fixes #2735
2026-03-11 22:09:50 -04:00
Darren.Zeng 7ddf10f86d feat(onboard): add --reinit flag to prevent accidental config overwrite (#3102)
* feat(onboard): add --reinit flag to prevent accidental config overwrite

Add --reinit flag to onboard command that:
- Backs up existing ~/.zeroclaw directory with timestamp
- Starts fresh initialization after backup
- Requires --interactive mode to work
- Prevents accidental configuration loss

This addresses issue #3013 where onboard could accidentally
overwrite all configuration without warning.

Closes #3013

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(ci): SHA-pin all third-party GitHub Actions

Replace mutable version tags with immutable commit SHAs to prevent
tag-hijacking supply chain attacks (P1 finding).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: retrigger CI after startup_failure

* fix(onboard): address PR #3102 review issues for --reinit flag

- Use resolve_runtime_dirs_for_onboarding() instead of hardcoded ~/.zeroclaw
- Remove unsafe relative path fallback, bail instead
- Add user confirmation prompt before reinitializing config
- Update docs/reference/cli/commands-reference.md with --reinit docs

* style: fix cargo fmt and clippy violations

- Fix import ordering in src/config/mod.rs (rustfmt)
- Collapse single-arg encrypt/decrypt calls in src/config/schema.rs (rustfmt)
- Box::pin large onboard futures to fix clippy::large_futures in src/main.rs

These violations were blocking CI lint checks.

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Simian Astronaut 7 <simianastronaut7@gmail.com>
2026-03-11 22:09:39 -04:00
Darren.Zeng 6352f024b2 docs(readme): add features documentation (#3107) 2026-03-11 22:09:36 -04:00
Darren.Zeng ef2f8e9808 fix(daemon): handle SIGTERM for graceful shutdown (#3193)
The daemon previously only handled SIGINT (Ctrl+C), ignoring SIGTERM
which is the standard termination signal used by Docker, Kubernetes,
and systemd. This caused containers to wait for the grace period
then be force-killed with SIGKILL.

Now the daemon handles both SIGINT and SIGTERM for graceful shutdown.
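
A minimal Unix sketch of the dual-signal wait, assuming the daemon already runs on a tokio runtime:

```rust
use tokio::signal::unix::{signal, SignalKind};

async fn wait_for_shutdown() -> std::io::Result<()> {
    let mut sigterm = signal(SignalKind::terminate())?;
    tokio::select! {
        _ = tokio::signal::ctrl_c() => {} // SIGINT (Ctrl+C)
        _ = sigterm.recv() => {}          // SIGTERM (Docker, Kubernetes, systemd)
    }
    Ok(())
}
```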

Fixes #2529
2026-03-11 22:09:34 -04:00
Darren.Zeng 6f482051ec docs: replace main branch references with master (#3194)
The repository uses master as the default branch, but many documentation
files and scripts were referencing main branch URLs that would 404.

Updated all references from zeroclaw-labs/zeroclaw/main to
zeroclaw-labs/zeroclaw/master in README files and documentation.

Fixes #2929
Fixes #3061

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 21:54:08 -04:00
Argenis 8bed9c5485 fix(ci,release): update all scripts from origin/main to origin/master (#3238)
* fix(ci): update rust_strict_delta_gate.sh to reference origin/master

* fix(ci): update docs_quality_gate.sh to reference origin/master

* fix(ci): update collect_changed_links.py to reference origin/master

* fix(release): update cut_release_tag.sh to reference origin/master
2026-03-11 21:53:48 -04:00
Argenis 6861664258 docs: add branching model section to CONTRIBUTING.md (#3237)
Adds a prominent Branching Model section to CONTRIBUTING.md explaining:
- master is the sole default branch (main has been removed)
- Contributors should use feat/* and fix/* branches off master
- Historical context for the migration (refs #2929, #3061, #3194)

Also updates fork workflow instructions to show both feat/ and fix/ prefixes.
2026-03-11 21:53:38 -04:00
Argenis 5cf1c77531 style: fix cargo fmt violations blocking CI lint (#3244)
* style: fix cargo fmt in ollama.rs test

* style: cargo fmt dispatcher.rs

* style: cargo fmt loop_.rs

* style: cargo fmt schema.rs

* style: cargo fmt mod.rs

* style: cargo fmt ws.rs

* style: cargo fmt ollama.rs
2026-03-11 21:53:25 -04:00
Argenis 483d3c0853 fix(ollama): strip <think> tags from Qwen responses and validate tool calls (#3079) (#3241)
- Strip `<think>...</think>` blocks in parse_tool_calls(), XmlToolDispatcher,
  and OllamaProvider before processing tool-call XML
- Add effective_content() fallback: when content is empty after stripping
  think tags, check the thinking field for tool-call XML
- Add strip_think_tags() to ollama.rs, loop_.rs, and dispatcher.rs
- Add comprehensive tests for think-tag stripping and tool-call parsing

Fixes #3079
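
A sketch of the stripping logic under the same name the commit uses (exact behavior assumed):

```rust
fn strip_think_tags(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut rest = input;
    while let Some(start) = rest.find("<think>") {
        out.push_str(&rest[..start]);
        match rest[start..].find("</think>") {
            // Skip past the closing tag and keep scanning.
            Some(off) => rest = &rest[start + off + "</think>".len()..],
            // Unterminated block: drop the remainder.
            None => rest = "",
        }
    }
    out.push_str(rest);
    out.trim().to_string()
}
```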

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 20:35:33 -04:00
Argenis f5cd6baec3 fix(gateway): add /api/integrations/settings, echo WS protocol, persist session ID (#3242)
- #3009: Add handle_api_integrations_settings endpoint returning JSON with
  per-integration enabled/category/status so /api/integrations/settings no
  longer falls through to the SPA static handler.

- #3010: Extract Sec-WebSocket-Protocol header in handle_ws_chat and echo
  back "zeroclaw.v1" via ws.protocols() when the client requests it.

- #3038: Generate a persistent session_id (crypto.randomUUID stored in
  sessionStorage) in the web WS client and pass it as a query parameter.
  Add session_id: Option<String> to WsQuery on the server side so the
  backend can key conversations by session across reconnects.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 20:35:27 -04:00
Argenis dfe0221e49 feat(web): auto-expanding chat textarea and copy-on-hover for messages (#3119, #3120) (#3243)
Replace single-line <input> with auto-expanding <textarea> (min 44px,
max 200px) that resizes on each keystroke. Add a copy button that
appears on hover for every message bubble, showing a checkmark on
successful clipboard write.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 20:35:22 -04:00
Argenis 12cfe48047 fix(config): add channel secrets roundtrip test and gitignore entries (#3126) (#3240)
Add a test verifying Telegram bot_token is encrypted on save and
decrypted on load, and add .gitignore entries for local state backups.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 20:35:11 -04:00
Jaime Linares 0479bfca36 fix(channel): reconnect WhatsApp Web session and show QR on logout (#3045)
Wraps the WhatsApp Web listen() in a reconnect loop. When Event::LoggedOut
fires, the bot is torn down, stale session DB files are deleted, and after
exponential backoff (3s base, 300s cap, max 10 retries) the loop restarts,
triggering fresh QR pairing.

- Broadcast channel for logout signaling from event handler to listen loop
- Session file cleanup (primary + WAL + SHM) only on explicit LoggedOut
- Proper resource ordering: client lock, abort handle, drop bot/device
- Tests for retry delay, counter, session purge, and file paths helpers
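
The backoff shape, roughly (3s base and 300s cap are from the commit message; doubling is assumed):

```rust
use std::time::Duration;

/// Delay before reconnect attempt `attempt` (0-based); the caller
/// gives up after 10 retries.
fn retry_delay(attempt: u32) -> Duration {
    let secs = 3u64.saturating_mul(1u64 << attempt.min(16)).min(300);
    Duration::from_secs(secs)
}
```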
2026-03-11 20:30:28 -04:00
Argenis 67e581d8ae fix(gateway): add health and pairing proxy routes to vite dev server (#3056)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 20:08:22 -04:00
Argenis 195c7ba919 fix(agent): resolve display text for tool-only turns (#3054) 2026-03-11 20:08:16 -04:00
James Cowan bb66df5276 fix(config): encrypt and decrypt all channel secrets on save/load (#3217)
Adds symmetric encrypt/decrypt calls for all channel secret fields in
Config::save() and Config::load_or_init(). Previously only nostr.private_key
was handled, leaving all other channel secrets (bot_token, app_token,
access_token, api_token, password, etc.) and gateway.paired_tokens stored
as plaintext when secrets.encrypt = true.

Closes #3175, closes #3173.

Co-authored-by: jameslcowan
2026-03-11 20:02:20 -04:00
Thomas Tuffin d47f6703d8 fix: revert Rust bump to 1.93 in Dockerfile (#3208)
Reverts Dockerfile from rust:1.94-slim to rust:1.93-slim.

Rust 1.94 triggers a recursion limit overflow in matrix-sdk 0.16.0
(rust-lang/rust#152942, matrix-org/matrix-rust-sdk#6254). No upstream
fix available yet. CI uses Rust 1.92 and is unaffected.

Fixes #3207
2026-03-11 19:38:36 -04:00
Alan P John 74fe29d772 feat: Add Opencode-go provider (#3113)
Adds opencode-go as a first-class provider with dedicated API endpoint,
env var, onboarding wizard wiring, and test coverage.

CI failures are pre-existing on master (Rust 1.94 formatting/lint changes per #3207).
2026-03-11 19:35:43 -04:00
Argenis 86ca34ac1f fix(tests): update assertions for live tool call notifications (#3232)
The live tool call notifications feature sends extra messages to the
channel when tools are invoked. Update 5 tests that assumed exactly
1 sent message to instead check the last message in the list.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 19:26:45 -04:00
Argenis f0f0f80895 feat(matrix): add multi-room support (#3224)
Enable a single Matrix bot instance to respond in multiple rooms:
- Disable the room_id filter so messages from all joined rooms are processed
- Embed room_id in reply_target as "user||room_id" for routing replies
- Include room_id in channel field for per-room conversation isolation
- Extract room_id from recipient in send() for correct message routing

The configured room_id still serves as a fallback for direct sends
without a "||" separator in the recipient.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 19:13:18 -04:00
Argenis 01bcba83ad feat(matrix): add file upload handling and voice message support (#3222) 2026-03-11 19:12:44 -04:00
Argenis 4da85cee6e chore(github): update review ownership routing (#3216)
- Add @theonlyhennygod as first-listed code owner on all CODEOWNERS paths
- Add SimianAstronaut7 as maintainer with PR approval authority in docs
- Normalize WORKFLOW_OWNER_LOGINS casing to canonical GitHub logins
2026-03-11 19:11:53 -04:00
Argenis bdc0f325bf feat(matrix): add pin and unpin message support (#3220)
Implement pin_message and unpin_message for the Matrix channel using
the m.room.pinned_events state event. Adds default no-op trait methods
to the Channel trait so other channel implementations are unaffected.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 19:10:08 -04:00
Abdullah Imad 248348bd80 feat(matrix): reaction and threading support (#3219)
Implement add_reaction/remove_reaction for Matrix channel using
ReactionEventContent and redaction. Add threading support via
Relation::Thread in send() and thread_ts extraction from incoming
messages, enabling threaded conversations.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 19:07:49 -04:00
Abdullah Imad bd70c0f45b feat(observer): live tool call notifications (#3221)
Add ChannelNotifyObserver that wraps the observer to forward tool-call
events as real-time threaded messages on messaging channels. Include
tool arguments (truncated) in ToolCallStart events for better
visibility into what tools are doing. Auto-thread final replies when
tools were used.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 19:07:34 -04:00
Abdullah Imad d0edcec1f9 feat(prompt): refresh stale datetime (#3223)
The system prompt is built once at daemon startup and cached. The
"Current Date & Time" section becomes stale immediately. This patch
replaces it with a fresh timestamp every time build_channel_system_prompt
is called (i.e. per incoming message).
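
The refresh amounts to rebuilding the section on every call, e.g. (chrono assumed; heading and format illustrative):

```rust
use chrono::Local;

/// Rebuilt per incoming message so the prompt never carries a stale clock.
fn current_datetime_section() -> String {
    format!(
        "## Current Date & Time\n{}",
        Local::now().format("%Y-%m-%d %H:%M:%S %:z")
    )
}
```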

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 19:04:23 -04:00
SimianAstronaut7 9423b9c94e Merge pull request #3215 from zeroclaw-labs/feat-readme-fix
docs(readme): remove retired social links
2026-03-11 22:30:30 +00:00
SimianAstronaut7 95da0062de chore: bump version to 0.1.9 for stable release (#3225)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 18:23:38 -04:00
SimianAstronaut7 7634d8d1f8 Merge branch 'master' into feat-readme-fix 2026-03-11 20:50:56 +00:00
JordanTheJet 744620bc34 Merge pull request #3214 from zeroclaw-labs/artifact-enhancement
fix(ci): downgrade action-gh-release to v2.4.2 to fix release finalization
2026-03-11 15:53:44 -04:00
argenis de la rosa f67abd9fc8 docs(readme): remove retired social links
Remove WeChat, Xiaohongshu, and Telegram from the README social badges and the aligned locale entry points.
2026-03-11 15:48:43 -04:00
SimianAstronaut7 94a4d72e71 Merge branch 'master' into artifact-enhancement 2026-03-11 19:41:45 +00:00
Aleksandr Prilipko 39353748fa fix(memory): resolve embedding api_key from embedding_provider env var, not default_provider key (#3184)
When embedding_provider differs from default_provider (e.g. default=gemini,
embedding=openai), the caller-supplied api_key belongs to the chat provider.
Passing it to the embedding endpoint causes 401 Unauthorized (gemini key
sent to api.openai.com/v1/embeddings).

Add embedding_provider_env_key() which looks up OPENAI_API_KEY,
OPENROUTER_API_KEY, or COHERE_API_KEY before falling back to the
caller-supplied key. This matches the provider-specific env var resolution
in providers/mod.rs without introducing cross-module coupling.
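
A sketch of the lookup (provider names and env vars as listed in the commit message):

```rust
/// Prefer the embedding provider's own env var; None means "fall back
/// to the caller-supplied key".
fn embedding_provider_env_key(provider: &str) -> Option<String> {
    let var = match provider {
        "openai" => "OPENAI_API_KEY",
        "openrouter" => "OPENROUTER_API_KEY",
        "cohere" => "COHERE_API_KEY",
        _ => return None,
    };
    std::env::var(var).ok().filter(|key| !key.is_empty())
}
```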

Also add config_secrets_survive_save_load_roundtrip test: full save→load
cycle with channel credentials (telegram, discord, slack bot_token,
slack app_token) and gateway paired_tokens, verifying that enc2: values
are correctly decrypted by Config::load_or_init(). Regression guard for
issues #3173 and #3175.

Closes #3083

Co-authored-by: ZeroClaw Bot <zeroclaw_bot@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-11 15:39:54 -04:00
Simian Astronaut 7 71f013af49 chore: update action-gh-release to version 2.4.2 in release workflows 2026-03-11 15:39:00 -04:00
Argenis 74f411bd2b Merge pull request #3198 from panviktor/fix/channel-model-switch-resolve-routes
fix(channels): resolve provider from model_routes on /model, preserve context
2026-03-11 13:48:30 -04:00
Argenis 50a65ea0e5 Merge branch 'master' into fix/channel-model-switch-resolve-routes 2026-03-11 13:32:15 -04:00
Argenis 655e5fd56a Merge pull request #3204 from zeroclaw-labs/codex/add-simian-codeowners-master
chore(codeowners): add SimianAstronaut7 to review routing
2026-03-11 12:33:14 -04:00
argenis de la rosa 072a10eff4 chore(codeowners): add SimianAstronaut7 to review routing 2026-03-11 12:32:49 -04:00
SimianAstronaut7 9baa02a40b Merge pull request #3203 from zeroclaw-labs/build-artifact-fix
fix(ci): exclude stale artifacts from release and Docker builds
2026-03-11 16:31:35 +00:00
Simian Astronaut 7 0af0f0344e fix: remove data directory copy from Dockerfile and update artifact download patterns in release workflows 2026-03-11 12:20:54 -04:00
Argenis 9cfc88c38c Merge pull request #3201 from whtiehack/pr/fix-channel-embedding-routes
fix(channels): pass embedding routes to channel memory init
2026-03-11 12:07:53 -04:00
Argenis 87127e2a08 Merge pull request #3200 from whtiehack/pr/fix-strip-tool-result-display
fix(agent): strip prompt-guided tool artifacts from visible replies
2026-03-11 12:07:08 -04:00
smallwhite 7cacdef2d1 fix(channels): pass embedding routes to channel memory init 2026-03-11 23:57:46 +08:00
SimianAstronaut7 9469b5bdbe Merge pull request #3199 from zeroclaw-labs/e2e-testing
test: restructure test suite into five-level taxonomy. cleaned up + documented
2026-03-11 15:54:38 +00:00
Simian Astronaut 7 ecbe5e2c68 docs(testing): note both co-located and separate unit test patterns
The codebase uses both #[cfg(test)] blocks in source files (194 files)
and separate tests.rs files (src/agent/tests.rs). Document both.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 11:52:09 -04:00
Simian Astronaut 7 86a2fc2594 docs(testing): clarify unit tests are co-located in source files
Unit tests use #[cfg(test)] mod tests blocks inside src/**/*.rs,
not separate tests.rs files.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 11:50:47 -04:00
smallwhite 1c2a49459e fix(agent): strip prompt-guided tool artifacts from visible replies 2026-03-11 23:50:39 +08:00
SimianAstronaut7 e1f37a307e Merge branch 'master' into e2e-testing 2026-03-11 15:26:19 +00:00
Argenis fd40eeb408 Merge pull request #3182 from rareba/fix/cargo-fmt-config-imports
style: fix cargo fmt violations blocking all PR CI
2026-03-11 11:10:49 -04:00
panviktor 0ee3b6d617 fix(channels): resolve provider from model_routes on /model, preserve context
- `/model <name>` now auto-resolves provider from configured model_routes
  by matching model name or hint, fixing 404 when switching to models on
  different providers (e.g. `/model kimi-k2.5` with anthropic default)
- Conversation history is no longer cleared on `/model` or `/models` —
  users can explicitly reset via `/new`
- Matrix channel now supports `/model`, `/models`, and `/new` commands
- `/model` (no args) lists configured model routes with hints

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 16:10:42 +01:00
Simian Astronaut 7 d0781f31a5 test: restructure test suite into five-level taxonomy
Reorganize the flat tests/ directory into a structured taxonomy:
- component/ (174 tests): single-subsystem tests
- integration/ (65 tests): multi-component wiring tests
- system/ (5 tests): full-stack with real SQLite memory
- live/ (5 tests, #[ignore]): real external API tests
- manual/: shell/Python scripts for human-driven testing
- support/: shared mock infrastructure (MockProvider, EchoTool, etc.)
- fixtures/: JSON trace fixtures for declarative LLM replay

Deduplicates ~180 lines of identical mock code from agent_e2e.rs and
agent_loop_robustness.rs into shared support modules. Adds new component
tests for gateway (HMAC signature verification) and security (config
defaults, TOML round-trips). Adds system-level tests exercising the
full agent turn cycle with real SQLite memory.

Updates Cargo.toml with [[test]] entries, dev/ci.sh with level-specific
commands, and docs/contributing/testing.md with a comprehensive testing
guide.

Zero impact on src/.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 11:03:33 -04:00
SimianAstronaut7 a3bc7a04dd Merge pull request #3196 from zeroclaw-labs/hardening-gateway-slack-tts
fix(multi): harden gateway auth, Slack file handling, and TTS subprocess safety
2026-03-11 14:32:40 +00:00
Simian Astronaut 7 5901f70dc0 fix(clippy): box-pin large onboard wizard futures
The onboard wizard futures exceed clippy's large_futures threshold
(16KB+). Wrap in Box::pin to heap-allocate and fix the lint.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 10:21:46 -04:00
Simian Astronaut 7 026158557c chore: fix rustfmt violations from TTS provider merge
Import ordering in config/mod.rs and line-wrapping in config/schema.rs
were left unformatted by PR #2994. Run cargo fmt to fix.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 10:09:06 -04:00
Simian Astronaut 7 162a2b65a5 fix(multi): harden Edge TTS path validation, add subprocess timeout, restore Slack symlink protection
Edge TTS: reject binary_path containing path separators to prevent
arbitrary executable paths bypassing the basename allowlist. Wrap
subprocess execution in tokio::time::timeout (60s) matching HTTP
providers.
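
Roughly (allowlist from the earlier TTS hardening commit; anyhow used for brevity, command wiring elided):

```rust
use std::time::Duration;

/// Reject anything that isn't a bare allowlisted basename, so a configured
/// binary_path can't smuggle in an arbitrary executable path.
fn validate_tts_binary(path: &str) -> bool {
    !path.contains('/') && !path.contains('\\') && matches!(path, "edge-tts" | "edge-playback")
}

/// Cap subprocess runtime at 60s, matching the HTTP providers.
async fn run_with_timeout(mut cmd: tokio::process::Command) -> anyhow::Result<std::process::Output> {
    Ok(tokio::time::timeout(Duration::from_secs(60), cmd.output()).await??)
}
```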

Slack: restore symlink_metadata check before rename to prevent
symlink-following attacks on attachment output paths.

Docs: add TTS module misplacement note to refactor-candidates.md —
tts.rs belongs in src/tools/, not src/channels/.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 10:05:52 -04:00
Simian Astronaut 7 e6bfb1ebee fix(slack): add HTTP timeouts, proper jitter, and cache bounds
HTTP clients had no timeouts and could hang forever; add 30s total and
10s connect timeouts. Replace clock-nanos-based jitter with rand::random
for proper randomness. Add a 1000-entry cap to the user display name
cache with expired-entry pruning. Fix truncate_text to avoid scanning
the full string twice when checking for truncation.
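
The client construction, approximately (reqwest assumed):

```rust
use std::time::Duration;

fn slack_http_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .timeout(Duration::from_secs(30))         // total per-request cap
        .connect_timeout(Duration::from_secs(10)) // TCP connect cap
        .build()
}
```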

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 08:58:20 -04:00
Simian Astronaut 7 9a6de3b204 fix(tts): move Google API key to header and sanitize provider inputs
Google TTS was passing the API key as a URL query parameter, which can
appear in logs and proxy access records. Move it to the x-goog-api-key
header instead. Add input validation for ElevenLabs voice IDs (reject
non-alphanumeric/dash/underscore characters) and restrict Edge TTS
binary_path to allowed basenames (edge-tts, edge-playback).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 08:58:13 -04:00
Simian Astronaut 7 f894bc4140 fix(slack): only send bearer token on first redirect hop
Slack private file fetching was sending the bot token on every redirect
hop. Since Slack CDN redirects use pre-signed URLs, sending the bearer
token to CDN hosts is unnecessary credential exposure.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 08:58:05 -04:00
Simian Astronaut 7 a1c65558d8 fix(gateway): restrict admin endpoints to localhost and replace process::exit with graceful shutdown
Admin endpoints (/admin/shutdown, /admin/paircode, /admin/paircode/new) were
completely unauthenticated, allowing any network client to shut down the gateway
or read/generate pairing codes. Add require_localhost() guard that returns 403
for non-loopback IPs.
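
The guard is essentially a loopback check on the peer address (router wiring elided):

```rust
use std::net::SocketAddr;

/// Gate for /admin/* routes: only loopback peers may proceed;
/// anything else gets a 403 from the handler.
fn require_localhost(peer: SocketAddr) -> bool {
    peer.ip().is_loopback()
}
```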

Replace std::process::exit(0) in shutdown handler with a tokio watch channel
for graceful shutdown, allowing proper destructor cleanup and connection
draining. Replace the 500ms sleep race in the restart command with a poll loop
that waits for the port to actually become free.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 08:57:34 -04:00
Giulio V ed7c191fcd style: fix cargo fmt + clippy violations from TTS merge
- config/mod.rs: reorder TTS imports alphabetically (rustfmt)
- config/schema.rs: collapse single-arg encrypt/decrypt calls (rustfmt)
- main.rs: Box::pin large onboard futures to fix clippy::large_futures

These violations were introduced by the TTS providers merge (#2994)
and are blocking CI lint on all open PRs targeting master.
2026-03-11 11:41:11 +01:00
Giulio V 69abd217d7 style: fix cargo fmt violations in config module
The TTS providers merge (#2994) introduced import ordering and
line-wrapping that doesn't match rustfmt output, causing lint
failures on all open PRs targeting master.
2026-03-11 11:30:56 +01:00
Argenis f2035819c2 Merge pull request #2994 from rareba/feature/tts-providers
feat(tts): add multi-provider TTS system
2026-03-11 05:23:50 -04:00
Argenis 46d68fc8ba Merge pull request #3067 from kunalk16/fix-honor-default-temperature-agent-command
fix(config): honor default_temperature config while running "zeroclaw agent" without temperature parameter
2026-03-11 04:50:31 -04:00
Kunal Karmakar 06926a189d Merge branch 'master' of https://github.com/kunalk16/zeroclaw into fix-honor-default-temperature-agent-command 2026-03-11 08:36:35 +00:00
dependabot[bot] 8f68afb70c chore(deps): bump rust in the docker-all group
Bumps the docker-all group with 1 update: rust.


Updates `rust` from 1.93-slim to 1.94-slim

---
updated-dependencies:
- dependency-name: rust
  dependency-version: 1.94-slim
  dependency-type: direct:production
  dependency-group: docker-all
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-11 04:33:19 -04:00
Sid Jain 6016491985 fix(slack): harden redirect chain and auth health checks 2026-03-11 04:32:56 -04:00
Sid Jain bf11b7b1a0 fix(slack): resolve clippy warnings in URL and tests 2026-03-11 04:32:56 -04:00
Sid Jain 436619b015 chore(slack): satisfy rustfmt wrap in MIME warning 2026-03-11 04:32:56 -04:00
Sid Jain c6233b066c Revert "chore(docker): make cargo profile configurable for builds"
This reverts commit 942e6cb97452b3330827ea485473fc176e5bf103.
2026-03-11 04:32:56 -04:00
Sid Jain b33af6894e fix(slack): redact private URLs and harden attachment writes 2026-03-11 04:32:56 -04:00
Sid Jain f98a18502f chore(docker): make cargo profile configurable for builds 2026-03-11 04:32:56 -04:00
Sid Jain 14de5e527e fix(slack): preserve auth and validate image bytes 2026-03-11 04:32:56 -04:00
Sid Jain 7b7e08dc21 fix(slack): align cron workspace dir and socket retry backoff 2026-03-11 04:32:56 -04:00
Sid Jain 526d63fd75 feat: add slack file reading capability to zeroclaw 2026-03-11 04:32:56 -04:00
dependabot[bot] e20e0f7cb0 chore(deps): bump the rust-all group across 1 directory with 10 updates
Bumps the rust-all group with 10 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [tokio](https://github.com/tokio-rs/tokio) | `1.49.0` | `1.50.0` |
| [toml](https://github.com/toml-rs/toml) | `1.0.3+spec-1.1.0` | `1.0.6+spec-1.1.0` |
| [shellexpand](https://gitlab.com/ijackson/rust-shellexpand) | `3.1.1` | `3.1.2` |
| [fantoccini](https://github.com/jonhoo/fantoccini) | `0.22.0` | `0.22.1` |
| [uuid](https://github.com/uuid-rs/uuid) | `1.21.0` | `1.22.0` |
| [chrono](https://github.com/chronotope/chrono) | `0.4.43` | `0.4.44` |
| [which](https://github.com/harryfei/which-rs) | `8.0.0` | `8.0.2` |
| [rustls](https://github.com/rustls/rustls) | `0.23.36` | `0.23.37` |
| [libc](https://github.com/rust-lang/libc) | `0.2.182` | `0.2.183` |
| [tempfile](https://github.com/Stebalien/tempfile) | `3.25.0` | `3.26.0` |



Updates `tokio` from 1.49.0 to 1.50.0
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.49.0...tokio-1.50.0)

Updates `toml` from 1.0.3+spec-1.1.0 to 1.0.6+spec-1.1.0
- [Commits](https://github.com/toml-rs/toml/compare/toml-v1.0.3...toml-v1.0.6)

Updates `shellexpand` from 3.1.1 to 3.1.2
- [Commits](https://gitlab.com/ijackson/rust-shellexpand/compare/shellexpand-3.1.1...shellexpand-3.1.2)

Updates `fantoccini` from 0.22.0 to 0.22.1
- [Commits](https://github.com/jonhoo/fantoccini/compare/v0.22.0...v0.22.1)

Updates `uuid` from 1.21.0 to 1.22.0
- [Release notes](https://github.com/uuid-rs/uuid/releases)
- [Commits](https://github.com/uuid-rs/uuid/compare/v1.21.0...v1.22.0)

Updates `chrono` from 0.4.43 to 0.4.44
- [Release notes](https://github.com/chronotope/chrono/releases)
- [Changelog](https://github.com/chronotope/chrono/blob/main/CHANGELOG.md)
- [Commits](https://github.com/chronotope/chrono/compare/v0.4.43...v0.4.44)

Updates `which` from 8.0.0 to 8.0.2
- [Release notes](https://github.com/harryfei/which-rs/releases)
- [Changelog](https://github.com/harryfei/which-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/harryfei/which-rs/compare/8.0.0...8.0.2)

Updates `rustls` from 0.23.36 to 0.23.37
- [Release notes](https://github.com/rustls/rustls/releases)
- [Changelog](https://github.com/rustls/rustls/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rustls/rustls/compare/v/0.23.36...v/0.23.37)

Updates `libc` from 0.2.182 to 0.2.183
- [Release notes](https://github.com/rust-lang/libc/releases)
- [Changelog](https://github.com/rust-lang/libc/blob/0.2.183/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/libc/compare/0.2.182...0.2.183)

Updates `tempfile` from 3.25.0 to 3.26.0
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/commits/v3.26.0)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.50.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: rust-all
- dependency-name: toml
  dependency-version: 1.0.6+spec-1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: shellexpand
  dependency-version: 3.1.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: fantoccini
  dependency-version: 0.22.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: uuid
  dependency-version: 1.22.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: rust-all
- dependency-name: chrono
  dependency-version: 0.4.44
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: which
  dependency-version: 8.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: rustls
  dependency-version: 0.23.37
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: libc
  dependency-version: 0.2.183
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: rust-all
- dependency-name: tempfile
  dependency-version: 3.26.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: rust-all
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-11 04:32:15 -04:00
曾文锋0668000834 e3cc40c64c feat(gateway): add --new flag to get-paircode for non-disruptive pairing code generation
- Add --new flag to GetPaircode command in src/lib.rs
- Update main.rs to handle GetPaircode { new } parameter
- Add /admin/paircode/new POST endpoint in gateway/mod.rs
- Enhance documentation for constant_time_eq security function

Refs: #3015
2026-03-11 04:30:58 -04:00
曾文锋0668000834 9d182c6dd8 fix(security): restore constant-time comparison bitwise operator
The bitwise & operator is intentional in constant_time_eq() to prevent
timing side-channel attacks. Both comparisons must always execute to
ensure constant-time behavior regardless of the first comparison result.

- Revert logical && back to bitwise &
- Add #[allow(clippy::needless_bitwise_bool)] annotation
- Add explanatory comment documenting the intentional use
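
The shape of the function (sketch; the real implementation lives in the gateway code):

```rust
#[allow(clippy::needless_bitwise_bool)]
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    // Bitwise & (not &&) so both sides always evaluate: no early exit
    // that could leak timing information.
    (a.len() == b.len()) & (diff == 0)
}
```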
2026-03-11 04:30:58 -04:00
Darren.Zeng d7d114eae7 Update package-lock.json 2026-03-11 04:30:58 -04:00
曾文锋0668000834 b1dfa192b8 fix(gateway): address CodeRabbit review feedback
- Add security warning for 0.0.0.0 binding in help text
- Implement proper gateway shutdown before restart via /admin/shutdown endpoint
- Fetch live pairing code from running gateway via /admin/paircode endpoint
- Extract duplicate code into helper functions
- Fix clippy warnings
2026-03-11 04:30:58 -04:00
曾文锋0668000834 7ed1bdc104 fix(gateway): address CodeRabbit review issues
- Add security warning for 0.0.0.0 binding in GatewayCommands::Start help
- Implement proper gateway shutdown via /admin/shutdown endpoint for Restart command
- Add /admin/paircode endpoint and update GetPaircode to fetch live pairing code
- Extract helper functions: resolve_gateway_addr, log_gateway_start, shutdown_gateway, fetch_paircode

Fixes: #3101
2026-03-11 04:30:58 -04:00
曾文锋0668000834 7f6bd651f7 fix(gateway): address CodeRabbit review feedback for PR #3101
- Fix Critical: Split illegal or-pattern (Some(...) | None) into separate match arms
- Fix Major: Implement restart command with graceful shutdown check
- Fix Major: Improve get-paircode to check gateway status and provide clear instructions
- Fix Minor: Update help text to document public-bind precondition

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-11 04:30:58 -04:00
曾文锋0668000834 491ac601e6 feat(gateway): add restart and get-paircode subcommands
Add GatewayCommands enum with three subcommands:
- start: Start the gateway server (default behavior preserved)
- restart: Restart the gateway server
- get-paircode: Show current pairing status without restarting

This improves gateway management by allowing users to:
1. Restart gateway without manual stop/start
2. Check pairing status without disrupting running gateway

Closes #3014
Closes #3015

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-11 04:30:58 -04:00
dependabot[bot] b4da085f0c chore(deps): bump actions/download-artifact from 4 to 8
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 8.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v8)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-11 04:30:08 -04:00
Kunal Karmakar 0829bb92df Merge branch 'master' of https://github.com/kunalk16/zeroclaw into fix-honor-default-temperature-agent-command 2026-03-11 08:28:23 +00:00
曾文锋0668000834 7950f51bb9 fix(docker): add missing COPY data/ directive
Fixes #3063

The Dockerfile was missing the COPY directive for the data/ directory,
which contains security/attack-corpus-v1.jsonl required by the
semantic guard feature via include_str!() at compile time.

This caused Docker builds to fail with:
  error: couldn't read src/security/../../data/security/attack-corpus-v1.jsonl

Risk: low (build fix only, no runtime behavior change)
2026-03-11 04:23:21 -04:00
Jiecheng Wu 79cff39aa5 fix(docker): enable BuildKit and write config.toml via printf
- Dockerfile: replace heredoc with printf for config.toml to avoid
  Docker parse error (unknown instruction WORKSPACE_DIR)
- install.sh: set DOCKER_BUILDKIT=1 when building image so RUN --mount works
2026-03-11 04:21:44 -04:00
曾文锋0668000834 e94dafbbfb chore(examples): add example configuration 2026-03-11 04:18:55 -04:00
曾文锋0668000834 99460c3fff chore(editor): add .editorconfig 2026-03-11 04:18:41 -04:00
曾文锋0668000834 8e3775362e chore(git): add .gitattributes for line endings 2026-03-11 04:18:31 -04:00
曾文锋0668000834 1ef1b5d02d docs(changelog): add changelog 2026-03-11 04:18:16 -04:00
Kunal Karmakar e2a3907d35 Merge branch 'master' of https://github.com/kunalk16/zeroclaw into fix-honor-default-temperature-agent-command 2026-03-11 08:08:22 +00:00
guan 980ad7ebc9 fix(docs): correct broken path references in contributing docs
Several files in docs/contributing/ referenced other docs files using
incorrect paths like "docs/pr-workflow.md" when the files are actually
in the same directory (docs/contributing/).

Changes:
- docs/contributing/README.md: fix Suggested Reading Order paths
- docs/contributing/pr-workflow.md: fix link text references
- docs/contributing/reviewer-playbook.md: fix link text reference
- docs/vi/contributing/README.md: fix Vietnamese translation paths
- docs/i18n/vi/contributing/README.md: fix Vietnamese i18n paths
- docs/setup-guides/zai-glm-setup.md: fix custom-providers.md link

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 03:31:14 -04:00
Rick Morgans da0a592163 fix(imessage): parse attributedBody when text column is NULL
Modern macOS (Ventura+) stores iMessage content in the attributedBody
column as a binary typedstream blob rather than the text column. The
existing SQL filter `AND m.text IS NOT NULL` silently dropped all
incoming messages on affected systems.

Add a length-prefix extractor for the typedstream format and fall back
to attributedBody when text is NULL or empty. Includes real captured
blob fixtures and 14 new parser/integration tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-11 03:25:05 -04:00
Kunal Karmakar a6cd877ac8 Merge branch 'master' of https://github.com/kunalk16/zeroclaw into fix-honor-default-temperature-agent-command 2026-03-11 02:13:12 +00:00
SimianAstronaut7 138ff4249c Merge pull request #3147 from zeroclaw-labs/simianastronaut7/claude-skill-for-general-use
feat(skills): add zeroclaw operational skill for CLI and REST API usage
2026-03-11 00:26:04 +00:00
SimianAstronaut7 ff2da533a5 Merge branch 'master' into simianastronaut7/claude-skill-for-general-use 2026-03-10 23:49:34 +00:00
SimianAstronaut7 86ca1ba2b6 Merge pull request #3142 from zeroclaw-labs/simianastronaut7/api-key-preflight
feat(provider): add API key prefix pre-flight validation
2026-03-10 23:49:21 +00:00
SimianAstronaut7 f16a71826c Merge branch 'master' into simianastronaut7/api-key-preflight 2026-03-10 23:41:23 +00:00
simianastronaut 0c5b41b288 fix(tests): remove nonexistent examples dir from regression scan
The `examples/` directory no longer exists, causing
`source_does_not_use_legacy_reply_to_field` to panic in CI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 19:33:36 -04:00
simianastronaut f127ca8c93 feat(skills): add zeroclaw operational skill for CLI and REST API usage
Adds a Claude Code skill that helps users operate their ZeroClaw instance
through both the CLI and gateway API. The skill provides adaptive expertise
(adjusts detail level based on user signals), discovery logic (finds the
binary, checks gateway status), and covers all core operations: agent chat,
memory, cron, providers, config, SSE events, estop, and troubleshooting.

Also adds gitignore entry for skill eval workspaces.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 19:05:30 -04:00
simianastronaut 2e86d83ed7 feat(api): add API key prefix validation to prevent cross-provider mismatches
Introduce `check_api_key_prefix` to validate API key prefixes against their associated providers, catching mismatches early. Add unit tests covering known and unknown key formats. This improves error handling and user guidance when the wrong provider's key is supplied.
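
A sketch of the idea (the prefix table here is illustrative, not the shipped mapping):

```rust
/// Warn when a key's well-known prefix doesn't match the chosen provider;
/// unknown key formats pass silently.
fn check_api_key_prefix(provider: &str, key: &str) -> Option<String> {
    let expected = match provider {
        "openai" => "sk-",
        "anthropic" => "sk-ant-",
        _ => return None, // no known prefix for this provider
    };
    if key.starts_with(expected) {
        None
    } else {
        Some(format!("this key does not look like a {provider} key"))
    }
}
```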
2026-03-10 18:00:38 -04:00
SimianAstronaut7 9b9b85217c Merge pull request #3140 from zeroclaw-labs/simianastronaut7/readme-contributors-grid
docs: add dynamic contributor badge and contributors grid to README
2026-03-10 20:49:17 +00:00
simianastronaut 0e006b91e4 docs: update contributor badge in README and add contributors section 2026-03-10 16:39:02 -04:00
SimianAstronaut7 d93d7dab84 Merge pull request #3139 from zeroclaw-labs/simianastronaut7/build-fix
fix(build): ensure web/dist directory exists at compile time
2026-03-10 19:38:50 +00:00
simianastronaut ee7048502c fix(build): ensure web/dist directory exists at compile time
Add build.rs script that creates web/dist/ if missing, preventing
build failures when the web dashboard hasn't been pre-built.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 15:35:28 -04:00
SimianAstronaut7 2c203571d0 Merge pull request #3138 from zeroclaw-labs/mikeboensel/examples-refactored
docs: consolidate extension examples into contributing guide
2026-03-10 19:15:39 +00:00
simianastronaut 9781f07a98 docs: add extension examples and update references in contributing materials
- Introduced a new document, `extension-examples.md`, providing code examples for implementing custom extensions in ZeroClaw.
- Updated `SUMMARY.md` and `README.md` to include links to the new examples.
- Modified `change-playbooks.md` to reference the new examples for better guidance on extension implementation.
- Removed outdated example files for custom channels, memory, providers, and tools to streamline documentation.
2026-03-10 15:11:28 -04:00
Kunal Karmakar 4600469717 Merge branch 'fix-honor-default-temperature-agent-command' of https://github.com/kunalk16/zeroclaw into fix-honor-default-temperature-agent-command 2026-03-10 08:29:56 +00:00
Kunal Karmakar b2857cb836 Fix clippy linting 2026-03-10 08:29:45 +00:00
Kunal Karmakar 76d54193f1 Merge branch 'master' of https://github.com/kunalk16/zeroclaw into fix-honor-default-temperature-agent-command 2026-03-10 07:19:03 +00:00
SimianAstronaut7 046040d535 Merge pull request #3099 from zeroclaw-labs/cicd-best-practices
chore(ci,docs): SHA-pin Actions, scope permissions, add gate job, and localized READMEs
2026-03-10 02:47:59 -04:00
Simian Astronaut 7 283385624e chore(ci): rename gate job for clarity
Rename the CI gate job from "Gate" to "CI Required Gate" so the name better reflects its purpose in the workflow.
2026-03-10 02:47:07 -04:00
Simian Astronaut 7 f73f39d33d fix(ci): replace taiki-e/install-action with cargo install
The taiki-e/install-action is likely not on the org/repo Actions
allowlist, causing startup_failure for the entire workflow. Revert
to cargo install for cargo-audit and cargo-deny.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 02:37:22 -04:00
Simian Astronaut 7 d07d314fd0 chore: retrigger CI after startup_failure 2026-03-10 02:34:44 -04:00
Simian Astronaut 7 f7b057a743 Merge remote-tracking branch 'origin/master' into cicd-best-practices 2026-03-10 02:24:31 -04:00
SimianAstronaut7 276e3f67ca Merge pull request #3087 from zeroclaw-labs/docs/readme-update
docs: add localized README files for all 31 supported languages
2026-03-10 02:20:27 -04:00
SimianAstronaut7 0a9acf8150 Merge branch 'master' into docs/readme-update 2026-03-10 02:19:27 -04:00
SimianAstronaut7 535995f2ab Merge pull request #3097 from zeroclaw-labs/fix/release-web-build-step
fix(ci): add web dashboard build step to release workflows
2026-03-10 02:18:35 -04:00
Simian Astronaut 7 863d731b92 fix(docs): address CodeRabbit review feedback on localized READMEs
- README.ar.md: translate hero performance callout from English to Arabic
- README.nl.md: fix "Implantatie" → "Imitatie" in anti-impersonation heading
- README.tr.md: fix ambiguous security wording for native runtime sandbox
- README.uk.md: update "проект" → "проєкт" (modern Ukrainian orthography)
- README.fr/vi/ja/ru/zh-CN.md: update 6-language selector to full 31-language
  selector matching README.md for navigation parity
- README.de.md: add note clarifying docs links route to English sources
- README.el.md: add note marking this as a condensed translation with pointer
  to full English README

Addresses review comments from PR #3087.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 02:16:37 -04:00
Simian Astronaut 7 f19d4951b6 refactor(ci): extract web build into shared job to avoid 4x redundancy
Instead of each build matrix job independently installing Node.js and
running npm ci/build, extract web dashboard build into a single job
that uploads web/dist/ as an artifact. Build jobs download it before
cargo build. Reduces total CI time by ~3 Node installs + builds.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 02:09:30 -04:00
Simian Astronaut 7 7399894cc8 fix(ci): add web dashboard build step to release workflows
RustEmbed requires web/dist/ to exist at compile time. The PR checks
workflow used a placeholder mkdir, but release workflows need real
built assets since they produce the distributed binary. Add Node.js
setup and npm ci/build before cargo build in all three release/build
workflows.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 02:03:15 -04:00
SimianAstronaut7 6e7c13a864 Merge pull request #3096 from zeroclaw-labs/project-cleanup
chore: project cleanup — restructure docs, rename firmware, update CI workflows

- Base branch: `master` (all contributions target `master`)
- Problem: Repository structure had grown organically — docs were flat, firmware directories had redundant prefixes, CI workflow names were unclear, and stale build artifacts (`web/dist`) were tracked.
- Why it matters: Cleaner repo structure improves contributor onboarding, reduces confusion, and establishes consistent naming conventions across the project.
- What changed: Restructured `docs/` into topic-based subdirectories (setup-guides, reference, ops, security, hardware, contributing, maintainers); renamed firmware crate directories to drop `zeroclaw-` prefix; renamed CI workflow files for clarity; migrated GitHub URLs from `theonlyhennygod` to `zeroclaw-labs`; added `.vscode` config; added Claude skills (github-issue, github-pr, skill-creator); slimmed down AGENTS.md/CLAUDE.md; unified install scripts; removed tracked `web/dist`; relocated test scripts.
- What did **not** change (scope boundary): No runtime behavior, no Rust `src/` logic changes beyond path/URL reference updates in peripherals and providers.
2026-03-10 01:54:23 -04:00
Simian Astronaut 7 7ef9d8a7b5 Addressed clippy lint issues 2026-03-10 01:48:19 -04:00
Simian Astronaut 7 8d48472c9e fix(ci): add gate job and use pre-built security tools
Add composite Gate job to checks-on-pr.yml so branch protection
only needs a single required check. Replace cargo-install with
taiki-e/install-action for cargo-audit and cargo-deny to cut
minutes off every PR run. Mark CI/CD P1/P2 findings as resolved
in refactor-candidates.md.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 01:10:19 -04:00
Simian Astronaut 7 d1fffc3b74 fix(ci): scope release workflow permissions per-job
Narrow workflow-level permissions to contents:read and grant
write access only to the specific jobs that need it (publish
gets contents:write, docker gets packages:write). Reduces blast
radius if a build step is compromised (P1 finding).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 01:09:51 -04:00
Simian Astronaut 7 8538c4105a fix(ci): SHA-pin all third-party GitHub Actions
Replace mutable version tags with immutable commit SHAs to prevent
tag-hijacking supply chain attacks (P1 finding).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 01:09:32 -04:00
Simian Astronaut 7 08d6959e0d security: bump quinn-proto 0.11.13 → 0.11.14
Resolves a high-severity (CVSS 8.7) denial-of-service vulnerability in Quinn
endpoints. Lockfile-only change; no code modifications.
2026-03-10 00:55:30 -04:00
Simian Astronaut 7 ad8e1e65e0 fix(ci): resolve lint and build failures in PR checks
Apply cargo fmt to fix formatting diffs in openrouter.rs and serial.rs.
Add web/dist placeholder step to lint, test, and build jobs so
RustEmbed compiles without the gitignored frontend assets.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 00:36:24 -04:00
Simian Astronaut 7 c204d72cc4 TODOs 2026-03-10 00:19:18 -04:00
Simian Astronaut 7 852b67fff3 chore: add VSCode configuration files for project
- Introduced extensions.json to recommend essential Rust-related extensions.
- Added launch.json for debugging configurations with LLDB for various components.
- Created settings.json to configure editor preferences and Rust analyzer settings.
- Included tasks.json to define build, lint, test, and CI tasks for the project.
2026-03-10 00:19:12 -04:00
Simian Astronaut 7 df4e11bd1d ci(workflows): rename workflow files and add lint + security jobs
- Rename all 4 workflow files to match trigger and purpose
- Expand PR quality gate with dedicated lint and security audit jobs
- Align workflow display names, concurrency groups, and all doc references
2026-03-10 00:17:54 -04:00
Simian Astronaut 7 991a1ea9cc ignore coverage report 2026-03-09 23:51:44 -04:00
Simian Astronaut 7 97ebfe0549 minor typo 2026-03-09 23:51:26 -04:00
Simian Astronaut 7 ea61491f2a chore: migrate GitHub URLs from theonlyhennygod to zeroclaw-labs
- Replace all occurrences of theonlyhennygod/zeroclaw with zeroclaw-labs/zeroclaw
- Affects root metadata, source code headers, hardware setup docs, and crate docs
2026-03-09 23:49:56 -04:00
Simian Astronaut 7 1a78dcf447 refactor: update firmware path references to match directory renames
- Align src/peripherals/ and docs/hardware/ with the firmware directory renames
- Covers both compiled references (include_str!, constants) and documentation
2026-03-09 23:49:20 -04:00
Simian Astronaut 7 620e4adee5 refactor(firmware): update crate names and paths to match directory renames
- Align Cargo.toml package names and Cargo.lock with directory renames from previous commit
- Update build and flash instructions in firmware docs
2026-03-09 23:47:45 -04:00
Simian Astronaut 7 631e7f61b5 refactor(firmware): rename directories to drop zeroclaw- prefix
- zeroclaw-arduino → arduino
- zeroclaw-esp32 → esp32
- zeroclaw-esp32-ui → esp32-ui
- zeroclaw-nucleo → nucleo
- zeroclaw-uno-q-bridge → uno-q-bridge

Pure file moves — no content changes. Follow-up commits update
internal references (Cargo.toml names, include_str! paths, docs).
2026-03-09 23:46:17 -04:00
Simian Astronaut 7 36bc6c4aee refactor: rework Agents file per best practices (slim, light, project-specific, avoiding what the models already know) 2026-03-09 23:16:48 -04:00
Simian Astronaut 7 67b0ff0ee9 docs: update all internal links to match topic-based directory layout
- Bulk-fix internal links broken by docs/ reorganization into
  topic-based subdirectories
- Cover root READMEs (en/fr/ja/ru/vi/zh-CN), docs hubs, SUMMARY
  TOCs, and all subdirectory docs
- Fix ancillary issues: GitHub URL org name, LICENSE path, Vietnamese
  shim depths, installer rename references

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 23:09:09 -04:00
Simian Astronaut 7 c99a046b1f Merge remote-tracking branch 'origin/master' into project-cleanup 2026-03-09 22:40:46 -04:00
SimianAstronaut7 d17e9586ae Merge pull request #2976 from zeroclaw-labs/cicd-improvements
chore: update .gitignore, CODEOWNERS, and dependabot configuration + Skills for Standardizing Github work
2026-03-09 22:38:02 -04:00
Simian Astronaut 7 9f2655dc4d refactor: contributor tier automation and web/dist tracking
- Move `recompute_contributor_tiers.sh` to automate contributor labeling based on merged PR counts.
- Remove web/dist from git tracking (tracked by mistake).
2026-03-09 22:33:59 -04:00
Simian Astronaut 7 0b9a975da5 docs: restructure docs/ into topic-based directory layout
- Move ~60 flat files from docs/ root into category subdirs:
  security/, hardware/, ops/, reference/{api,cli,sop},
  setup-guides/, contributing/, maintainers/, assets/
- Absorb and remove datasheets/, operations/, getting-started/,
  project/, structure/, sop/ directories
- Move testing-telegram.md to tests/telegram/ alongside test scripts
- Merge FutureRefactorings.md into refactor-candidates.md
2026-03-09 22:25:20 -04:00
Simian Astronaut 7 507c93fcf5 chore(tests): relocate test scripts and testing docs
Move RUN_TESTS.md and TESTING_TELEGRAM.md to docs/contributing/, and move
quick_test.sh, test_telegram_integration.sh, generate_test_messages.py to
tests/telegram/. Move scripts/test_dockerignore.sh to tests/. Update internal
cross-references to reflect new paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 21:44:20 -04:00
Simian Astronaut 7 a6e5a6ffda chore(docs): relocate assets and project docs to docs/ subdirectories
Move zeroclaw.png and zero-claw.jpeg to docs/assets/, CLA.md to
docs/contributing/cla.md, and TRADEMARK.md to docs/project/trademark.md.
Update all cross-references in root README files (en, fr, ja, ru, vi, zh-CN)
to point to new locations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 21:44:08 -04:00
SimianAstronaut7 cf21cf093a Merge branch 'master' into cicd-improvements 2026-03-09 21:35:24 -04:00
Simian Astronaut 7 b454a9d301 Removed unused file 2026-03-09 20:33:34 -04:00
argenis de la rosa c8df83dd17 docs: add localized README files for all 31 supported languages
This commit adds complete README translations for all 31 languages supported
by the ZeroClaw web dashboard, enabling users to access documentation in
their native language.

New README files added:
- README.ko.md (Korean - 한국어)
- README.tl.md (Tagalog)
- README.es.md (Spanish - Español)
- README.pt.md (Portuguese - Português)
- README.it.md (Italian - Italiano)
- README.de.md (German - Deutsch)
- README.ar.md (Arabic - العربية) [RTL]
- README.hi.md (Hindi - हिन्दी)
- README.bn.md (Bengali - বাংলা)
- README.he.md (Hebrew - עברית) [RTL]
- README.pl.md (Polish - Polski)
- README.cs.md (Czech - Čeština)
- README.nl.md (Dutch - Nederlands)
- README.tr.md (Turkish - Türkçe)
- README.uk.md (Ukrainian - Українська)
- README.id.md (Indonesian - Bahasa Indonesia)
- README.th.md (Thai - ไทย)
- README.ur.md (Urdu - اردو) [RTL]
- README.ro.md (Romanian - Română)
- README.sv.md (Swedish - Svenska)
- README.el.md (Greek - Ελληνικά)
- README.hu.md (Hungarian - Magyar)
- README.fi.md (Finnish - Suomi)
- README.da.md (Danish - Dansk)
- README.nb.md (Norwegian Bokmål - Norsk)

Updated README.md with complete language selector links for all 31 languages.
RTL support added for Arabic, Hebrew, and Urdu.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-09 18:01:00 -04:00
Kunal Karmakar 225edecc3d Reuse default temperature 2026-03-09 14:54:29 +00:00
Kunal Karmakar 465b2ea6af Fix temperature range 2026-03-09 14:31:47 +00:00
Kunal Karmakar e70ca52dc4 Update wording as per review comment 2026-03-09 14:07:13 +00:00
Kunal Karmakar 45d2368730 Default of 0.7 2026-03-09 13:52:33 +00:00
Kunal Karmakar 026f2609f9 Support config without default_temperature 2026-03-09 13:28:08 +00:00
Kunal Karmakar a6cf7d4015 Honor default_temperature from config.toml for zeroclaw agent calls 2026-03-09 10:31:43 +00:00
Giulio V 48c7414bb3 feat(tts): add multi-provider TTS system (OpenAI, ElevenLabs, Google, Edge TTS)
Adds pluggable Text-to-Speech subsystem with TtsProvider trait,
TtsManager for provider selection, and per-provider config structs.
Includes secret encryption for TTS API keys.
2026-03-09 09:03:44 +01:00
SimianAstronaut7 f7fefd4b6c Merge pull request #2954 from antonvice/fix/unused-imports-and-peripherals
fix: resolve unused import warnings
2026-03-07 22:33:12 -05:00
SimianAstronaut7 2d359c1f74 Merge branch 'master' into fix/unused-imports-and-peripherals 2026-03-07 22:19:17 -05:00
simianastronaut 8439b40145 Returning file to prior state 2026-03-07 22:05:49 -05:00
simianastronaut 57065b07a3 PR creation/update skill 2026-03-07 21:54:16 -05:00
simianastronaut a7e295c966 Skill for making PRs in the proper format 2026-03-07 21:48:58 -05:00
simianastronaut d4caba0967 Removing extraneous required fields from Github PRs 2026-03-07 21:39:31 -05:00
simianastronaut 8c0375a9ba Skill Creator (from anthropic) 2026-03-07 21:39:10 -05:00
simianastronaut fa2faf408d chore: update .gitignore, CODEOWNERS, and dependabot configuration
- Add .zeroclaw/* to .gitignore to exclude ZeroClaw files from version control.
- Update CODEOWNERS to include @SimianAstronaut7 as a maintainer alongside @jordanthejet.
- Change dependabot target branch from dev to master for all update configurations.
- Revise master-branch-flow documentation to clarify active workflows and triggers.
2026-03-07 21:05:23 -05:00
Argenis a6102f8dd6 Merge pull request #2928 from zeroclaw-labs/chore/master-branch-model
chore: migrate to single master branch model and update maintainers
2026-03-07 11:03:16 -05:00
jordanthejet b4d619dd2b fix: remove deprecated macos-13 x86_64-apple-darwin target from release pipelines
The macos-13 runner is deprecated by GitHub Actions, causing the
x86_64-apple-darwin build to instantly cancel with no runner assigned.
This cascades to skip publish and docker jobs since they depend on all
matrix builds succeeding.

Intel Macs have been EOL since 2022; aarch64-apple-darwin via macos-14
covers all current macOS users (Rosetta handles x86_64 if needed).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-07 10:40:46 -05:00
Anton Vice e098c6d242 fix: resolve unused import warnings in channels and peripherals 2026-03-07 09:34:46 -05:00
jordanthejet 5dfe3372f5 fix: sync Vietnamese release-process with canonical English version
- Add missing Homebrew Core formula section (step 6)
- Add pub-homebrew-core.yml to workflow contract listing
- Update GHCR tag verification to include SHA tag detail

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-06 18:18:56 -05:00
jordanthejet ce6349741b fix: sync Vietnamese ci-map with canonical English version
- Add missing Rust-gate parity line to CI merge-blocking section
- Add Sec Vorpal Reviewdog and Pub Homebrew Core workflow entries
- Add Pub Homebrew Core and Sec Vorpal Reviewdog to trigger map
- Update Dependabot trigger wording to match English (PRs target master)
- Add Homebrew formula triage entry and fix numbering

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-06 18:13:40 -05:00
jordanthejet e3612880f3 fix: address remaining CodeRabbit review comments on PR #2928
- Fix Docker trigger semantics in Vietnamese ci-map docs to match
  canonical English wording (publish on tag push v*, smoke on master PRs)
- Add missing Rust-gate parity bullet to Vietnamese pr-workflow docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-06 18:07:45 -05:00
jordanthejet 73b862bb1f fix: address round-2 review comments on PR #2928
- docs/pr-workflow.md: replace hardcoded maintainer handles with
  generic WORKFLOW_OWNER_LOGINS + CODEOWNERS reference
- docs/vi/pr-workflow.md, docs/i18n/vi/pr-workflow.md: fix awkward
  "và hoặc là" phrasing, sync new branch-model bullets (workflow-owner
  config line + all PRs target master directly)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-06 17:20:44 -05:00
jordanthejet 31c027ed6d fix: replace remaining origin/main and main refs in release-process docs
- docs/release-process.md: 3 origin/main occurrences
- docs/vi/release-process.md: 3 origin/main + 6 `main` occurrences
- docs/i18n/vi/release-process.md: 3 origin/main + 6 `main` occurrences

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-06 16:53:03 -05:00
jordanthejet 571091ecef fix: resolve remaining main/dev references flagged in PR review
- docs/release-process.md: replace all `main` with `master` (lines 10,
  37, 45, 47, 54, 64, 73)
- docs/pr-workflow.md: fix "behind main" on line 219 to "behind master"
- docs/vi/pr-workflow.md: fix lines 216 and 273 (main -> master)
- docs/i18n/vi/pr-workflow.md: same fixes for canonical vi locale
- scripts/bootstrap.sh: fix raw URL from /main/ to /master/ (line 61),
  pin git clone to --branch master (line 909)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-06 16:37:36 -05:00
jordanthejet 44ac470d78 chore: migrate to single master branch model and update maintainers
- Replace all dev/main branch references with master across docs,
  templates, CI docs, and localized files (en, vi)
- Remove dev->main promotion model (no more Main Promotion Gate)
- Rename main-branch-flow.md to master-branch-flow.md and rewrite
  for single-branch workflow
- Update maintainers to theonlyhennygod and jordanthejet
- Update CODEOWNERS: replace @chumyin with @jordanthejet
- Update WORKFLOW_OWNER_LOGINS fallback references
- Update CODE_OF_CONDUCT enforcement contact to @argenistherose

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-06 13:01:32 -05:00
JordanTheJet 92e0f7aefd Merge pull request #2895 from zeroclaw-labs/sundai
ci: replace all workflows with simplified CI/CD pipeline
2026-03-05 22:52:53 -05:00
jordanthejet db536935bf fix: update coderabbit config for master branch and remove unsupported fields
- Changed base_branches from main/dev to master
- Removed unsupported base_branch_analysis field

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 22:36:24 -05:00
JordanTheJet d64be99621 Update .github/workflows/ci-full.yml
macbook version update

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2026-03-05 22:35:07 -05:00
jordanthejet f981d9ea69 fix: address CodeRabbit review comments
- Add concurrency group to promote-release workflow
- Fix markdown emphasis style in README (MD049)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 22:26:29 -05:00
jordanthejet 8230b26171 fix(ci): remove sccache, keep mold + nextest + no-incremental
sccache GHA cache backend is fragile — fails the entire build when
GitHub's artifact cache service is unavailable. Removed in favor of
Swatinem/rust-cache which handles failures gracefully.

Kept: mold linker, cargo-nextest, CARGO_INCREMENTAL=0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 21:53:24 -05:00
jordanthejet 4f62fb2ecb ci: retrigger after allowlist update 2026-03-05 21:51:11 -05:00
jordanthejet fee163fc74 perf(ci): add sccache, mold linker, and cargo-nextest for faster CI
- sccache: compiler caching for test builds (11-14% faster compilation)
- mold: faster linker on Linux builds
- cargo-nextest: parallel test runner (up to 35-60% faster tests)
- CARGO_INCREMENTAL=0: disable incremental compilation overhead in CI

Allowlist impact: added mozilla-actions/sccache-action@*

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 21:50:07 -05:00
jordanthejet 32e8dbbec5 feat(ci): split CI into auto (linux+macOS) and manual full matrix
Auto CI on PRs builds linux x86_64 and macOS arm64 only.
Remaining targets (linux arm64, macOS x86, Windows) available via
manual workflow_dispatch in ci-full.yml.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 21:43:33 -05:00
jordanthejet 6eef5bafcb feat(ci): full matrix build in CI + fix flaky bedrock test
- CI now builds across all 5 targets (linux x86/arm64, macOS x86/arm64,
  Windows) matching the release matrix
- Fix chat_fails_without_credentials test to accept "builder error"
  which occurs in CI environments without native TLS

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 21:40:45 -05:00
jordanthejet 53c1a3ecea fix(ci): target master branch instead of main
The default branch is master, not main. Updates CI and Beta Release
workflow triggers and corresponding docs references.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 21:34:33 -05:00
jordanthejet db917bc37b docs: update actions source policy for simplified workflow system
Reflects the complete workflow overhaul from 22 workflows to 3.
Updates allowlist to match current action usage and removes stale entries.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 21:24:42 -05:00
jordanthejet 47ea46e694 chore: restore .coderabbit.yaml config
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 21:18:38 -05:00
jordanthejet 9923544769 ci: replace all workflows with simplified CI/CD pipeline
Remove 22 workflow files and 9 JS scripts. Replace with 3 workflows:
- ci.yml: test + build on PRs
- release.yml: auto beta release on merge to main
- promote-release.yml: manual stable release promotion

Update README Development section to document the new CI/CD system.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-05 21:15:36 -05:00
563 changed files with 61237 additions and 10375 deletions
New file (+133 lines):
# Skill: github-issue
File a structured GitHub issue (bug report or feature request) for ZeroClaw interactively from Claude Code.
## When to Use
Trigger when the user wants to file a GitHub issue, report a bug, or request a feature for ZeroClaw. Keywords: "file issue", "report bug", "feature request", "open issue", "create issue", "github issue".
## Instructions
You are filing a GitHub issue against the ZeroClaw repository using structured issue forms. Follow this workflow exactly.
### Step 1: Detect Issue Type and Read the Template
Determine from the user's message whether this is a **bug report** or **feature request**.
- If unclear, use AskUserQuestion to ask: "Is this a bug report or a feature request?"
Then read the corresponding issue template to understand the required fields:
- Bug report: `.github/ISSUE_TEMPLATE/bug_report.yml`
- Feature request: `.github/ISSUE_TEMPLATE/feature_request.yml`
Parse the YAML to extract:
- The `title` prefix (e.g. `[Bug]: `, `[Feature]: `)
- The `labels` array
- Each field in the `body` array: its `type` (dropdown, textarea, input, checkboxes, markdown), `id`, `attributes.label`, `attributes.options` (for dropdowns), `attributes.description`, `attributes.placeholder`, and `validations.required`
This is the source of truth for what fields exist, what they're called, what options are available, and which are required. Do not assume or hardcode any field names or options — always derive them from the template file.
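For instance, a minimal sketch of the extraction, assuming mikefarah's `yq` (v4) is on PATH (the field paths come from GitHub's issue-form schema described above):
```bash
TEMPLATE=.github/ISSUE_TEMPLATE/bug_report.yml
yq '.title' "$TEMPLATE"               # title prefix, e.g. "[Bug]: "
yq '.labels | join(",")' "$TEMPLATE"  # labels array
# One row per form field: type, id, label (markdown fields have no id)
yq '.body[] | [.type, .id, .attributes.label] | @tsv' "$TEMPLATE"
```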
### Step 2: Auto-Gather Context
Before asking the user anything, silently gather environment and repo context:
```bash
# Git context
git log --oneline -5
git status --short
git diff --stat HEAD~1 2>/dev/null
# For bug reports — environment detection
uname -s -r -m # OS info
sw_vers 2>/dev/null # macOS version
rustc --version 2>/dev/null # Rust version
cargo metadata --format-version=1 --no-deps 2>/dev/null | jq -r '.packages[] | select(.name=="zeroclaw") | .version' 2>/dev/null # ZeroClaw version
git rev-parse --short HEAD # commit SHA fallback
```
Also read recently changed files to infer the affected component and architecture impact.
### Step 3: Pre-Fill and Present the Form
Using the parsed template fields and gathered context, draft values for ALL fields from the template:
- **dropdown** fields: select the most likely option from `attributes.options` based on context. For dropdowns where you're uncertain, note your best guess and flag it for the user.
- **textarea** fields: draft content based on the user's description, git context, and the field's `attributes.description`/`attributes.placeholder` for guidance on what's expected.
- **input** fields: fill with auto-detected values (versions, OS) or draft from user context.
- **checkboxes** fields: auto-check all items (the skill itself ensures compliance with the stated checks).
- **markdown** fields: skip these — they're informational headers, not form inputs.
- **optional fields** (where `validations.required` is false): fill if there's enough context, otherwise note "(optional — not enough context to fill)".
Present the complete draft to the user in a clean readable format:
```
## Issue Draft: [Bug]: <title> / [Feature]: <title>
**Labels**: <from template>
### <Field Label>
<proposed value or selection>
### <Field Label>
<proposed value>
...
```
Use AskUserQuestion to ask the user to review:
- "Here's the pre-filled issue. Please review and let me know what to change, or say 'submit' to file it."
If the user requests changes, update the draft and re-present. Iterate until the user approves.
### Step 4: Scope Guard
Before final submission, analyze the collected content for scope creep:
- Does the bug report describe multiple independent defects?
- Does the feature request bundle unrelated changes?
If multi-concept issues are detected:
1. Inform the user: "This issue appears to cover multiple distinct topics. Focused, single-concept issues are strongly preferred and more likely to be accepted."
2. Break down the distinct groups found.
3. Offer to file separate issues for each group, reusing shared context (environment, etc.).
4. Let the user decide: proceed as-is or split.
### Step 5: Construct Issue Body
Build the issue body as markdown sections matching GitHub's form-field rendering format. GitHub renders form-submitted issues with `### <Field Label>` sections, so use that exact structure.
For each non-markdown field from the template, in order:
```markdown
### <attributes.label>
<value>
```
For optional fields with no content, use `_No response_` as the value (this matches GitHub's native rendering for empty optional fields).
For checkbox fields, render each option as:
```markdown
- [X] <option label text>
```
### Step 6: Final Preview and Submit
Show the final constructed issue (title + labels + full body) for one last confirmation.
Then submit using a HEREDOC for the body to preserve formatting:
```bash
gh issue create --title "<title prefix><user title>" --label "<label1>,<label2>" --body "$(cat <<'ISSUE_EOF'
<body content>
ISSUE_EOF
)"
```
Return the resulting issue URL to the user.
### Important Rules
- **Always read the template file** — never assume field names, options, or structure. The templates are the source of truth and may change over time.
- **Never include personal/sensitive data** in the issue. Redact secrets, tokens, emails, real names.
- **Use neutral project-scoped placeholders** per ZeroClaw's privacy contract.
- **One concept per issue** — enforce the scope guard.
- **Auto-detect, don't guess** — use real command output for environment fields.
- **Match GitHub's rendering** — use `### Field Label` sections so issues look consistent whether filed via web UI or this skill.
New file (+209 lines):
# Skill: github-pr
Open or update a GitHub Pull Request for ZeroClaw. Handles creating new PRs with a fully filled-out template body, and updating existing PRs (title, body sections, labels, comments). Use this skill whenever the user wants to open a PR, create a pull request, update a PR, edit PR description, add labels to a PR, or sync a PR after new commits — even if they don't say "PR" explicitly (e.g., "submit this for review", "push and open for merge").
## Instructions
This skill supports two modes: **Open** (create a new PR) and **Update** (edit an existing PR). Detect the mode from context — if there's already an open PR for the current branch and the user didn't say "open a new PR", default to update mode.
The PR template at `.github/pull_request_template.md` is the source of truth for the PR body structure. Read it every time — never assume or hardcode section names, fields, or their order. The template may change over time and the skill should always reflect its current state.
---
## Shared: Read the PR Template
Before opening or updating a PR body, read `.github/pull_request_template.md` and parse it to understand:
- The `## ` section headers (these are the top-level sections of the PR body)
- The bullet points, fields, and prompts within each section
- Which sections are marked `(required)` vs optional/recommended
- Any inline formatting conventions (backtick options, Yes/No fields, etc.)
This parsed structure drives how you fill, present, and edit the PR body.
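For a quick look at the section skeleton, a one-liner sketch (the path is the template named above):
```bash
# Lists the top-level sections, including any "(required)" markers
grep -E '^## ' .github/pull_request_template.md
```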
---
## Mode: Open a New PR
### Step 1: Gather Context
Collect information to pre-fill the PR body. Run these in parallel:
```bash
# Branch and commit context
git branch --show-current
git log master..HEAD --oneline
git diff master...HEAD --stat
# Check if branch is pushed
git rev-parse --abbrev-ref --symbolic-full-name @{u} 2>/dev/null
# Environment (for validation evidence)
rustc --version 2>/dev/null
```
Also review the changed files and commit messages to understand the nature of the change (bug fix, feature, refactor, docs, chore, etc.) and which subsystems are affected.
### Step 2: Pre-Fill the Template
Using the parsed template structure and gathered context, draft a complete PR body:
- For each `## ` section from the template, fill in the bullet points and fields based on context from the commits, diff, and changed files.
- Use the field descriptions and placeholder text in the template as guidance for what each field expects.
- For Yes/No fields, infer from the diff (e.g., if no files in `src/security/` changed, security impact is likely all No).
- For required sections, always provide a substantive answer. For optional sections, fill if there's enough context, otherwise leave the template prompts in place.
- Draft a conventional commit-style PR title based on the changes (e.g., `feat(provider): add retry budget override`, `fix(channel): handle disconnect gracefully`, `chore(ci): update workflow targets`).
### Step 3: Present Draft for Review
Show the user the complete draft:
```
## PR Draft: <title>
**Branch**: <head> -> master
**Labels**: <suggested labels>
<full body with all sections filled>
```
Ask the user to review: "Here's the pre-filled PR. Review and let me know what to change, or say 'submit' to open it."
Iterate on changes until the user approves.
### Step 4: Push and Create
1. If the branch isn't pushed yet, push it:
```bash
git push -u origin <branch>
```
2. Create the PR using a HEREDOC for the body:
```bash
gh pr create --title "<title>" --base master --body "$(cat <<'PR_BODY_EOF'
<full body>
PR_BODY_EOF
)"
```
3. If labels were agreed on, add them:
```bash
gh pr edit <number> --add-label "<label1>,<label2>"
```
4. Return the PR URL to the user.
---
## Mode: Update an Existing PR
### Step 1: Identify the PR
1. **If a PR number or URL is given**: use that directly.
2. **If on a branch with an open PR**: auto-detect:
```bash
gh pr view --json number,title,body,labels,state,author,url,headRefName 2>/dev/null
```
3. **If neither**: ask the user for the PR number.
Verify the current user is the PR author:
```bash
CURRENT_USER=$(gh api user --jq '.login')
PR_AUTHOR=$(gh pr view <number> --json author --jq '.author.login')
# Stop here if the current user isn't the author
[ "$CURRENT_USER" = "$PR_AUTHOR" ] || echo "Not the PR author, stopping." >&2
```
If not the author, stop and inform the user.
### Step 2: Fetch Current State
```bash
gh pr view <number> --json number,title,body,labels,state,baseRefName,headRefName,url,author,reviewDecision,statusCheckRollup,commits
```
Display a summary:
```
## PR #<number>: <title>
**State**: <open/closed/merged>
**Branch**: <head> -> <base>
**Labels**: <label list>
**Checks**: <pass/fail/pending>
**URL**: <url>
```
### Step 3: Determine What to Update
Support these operations:
| Operation | How |
|---|---|
| **Edit title** | `gh pr edit <number> --title "<new title>"` |
| **Edit full body** | `gh pr edit <number> --body "<new body>"` |
| **Add labels** | `gh pr edit <number> --add-label "<label1>,<label2>"` |
| **Remove labels** | `gh pr edit <number> --remove-label "<label1>"` |
| **Edit specific section** | Parse body by `## ` headers, modify target section, re-submit full body |
| **Add a comment** | `gh pr comment <number> --body "<comment>"` |
| **Link an issue** | Edit the linked-issue section in the body |
| **Smart update after new commits** | Re-analyze and suggest section updates |
### Step 4: Handle Body Section Edits
When editing a specific section:
1. Parse the current PR body into sections by `## ` headers (see the sketch after this list)
2. Match the user's request to the corresponding section from the template
3. Show the current content of that section and the proposed replacement
4. On confirmation, modify only that section, reconstruct the full body, and submit
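For the extraction step, a minimal sketch (the section name `## Summary` is a hypothetical example; derive real names from the template):
```bash
# Save the current body, then print just the requested "## " section
gh pr view <number> --json body --jq '.body' > /tmp/pr_body.md
awk -v section='## Summary' '
  $0 == section {show=1; print; next}  # start at the requested header
  /^## /        {show=0}               # stop at the next top-level section
  show          {print}
' /tmp/pr_body.md
```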
### Step 5: Smart Update After New Commits
When the user wants to sync the PR description after pushing new changes:
1. Identify new commits:
```bash
gh pr view <number> --json commits --jq '.commits[].messageHeadline'
git log <base>..<head> --oneline
git diff <base>...<head> --stat
```
2. Re-read the PR template. Analyze which sections are now stale based on the new changes — use the template's section names and field descriptions to identify what needs updating rather than relying on hardcoded assumptions.
3. Present proposed updates section-by-section and confirm before applying.
### Step 6: Apply Updates
For title/label changes, use direct `gh pr edit` flags.
For body edits, use a HEREDOC:
```bash
gh pr edit <number> --body "$(cat <<'PR_BODY_EOF'
<full updated body>
PR_BODY_EOF
)"
```
For comments:
```bash
gh pr comment <number> --body "$(cat <<'COMMENT_EOF'
<comment text>
COMMENT_EOF
)"
```
### Step 7: Confirm
Fetch and display the updated state:
```bash
gh pr view <number> --json number,title,labels,url
```
Return the PR URL.
---
## Important Rules
- **Always read `.github/pull_request_template.md`** before filling or editing a PR body. Never assume section names, fields, or structure — derive everything from the template. It's the source of truth and may change.
- **For updates, only modify requested sections.** Preserve everything else exactly as-is.
- **Always show diffs before applying body edits.** Present current vs proposed for each changed section.
- **Never include personal/sensitive data** in PR content per ZeroClaw's privacy contract.
- **For label changes**, only use labels that exist in the repository. Check with `gh label list` if unsure.
- **Fetch the latest body before editing** to avoid clobbering concurrent changes.
- **For new PRs**, push the branch before creating (with `-u` to set upstream tracking).
New file (+202 lines):
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
New file (+485 lines):
---
name: skill-creator
description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
---
# Skill Creator
A skill for creating new skills and iteratively improving them.
At a high level, the process of creating a skill goes like this:
- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run claude-with-access-to-the-skill on them
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft quantitative evals if there aren't any (if some already exist, use them as-is or modify them where something needs to change). Then explain them to the user (either the new drafts or the ones that already existed)
- Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale
Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
Cool? Cool.
## Communicating with the user
The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you have? it only started very recently), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, and parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
It's OK to briefly explain terms when you're in doubt; a short definition goes a long way if you're unsure the user will get it.
---
## Creating a skill
### Capture Intent
Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.
1. What should this skill enable Claude to do?
2. When should this skill trigger? (what user phrases/contexts)
3. What's the expected output format?
4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
### Interview and Research
Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
Check available MCPs. If they'd be useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce the burden on the user.
### Write the SKILL.md
Based on the user interview, fill in these components:
- **name**: Skill identifier
- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
- **compatibility**: Required tools, dependencies (optional, rarely needed)
- **the rest of the skill :)**
### Skill Writing Guide
#### Anatomy of a Skill
```
skill-name/
├── SKILL.md (required)
│ ├── YAML frontmatter (name, description required)
│ └── Markdown instructions
└── Bundled Resources (optional)
├── scripts/ - Executable code for deterministic/repetitive tasks
├── references/ - Docs loaded into context as needed
└── assets/ - Files used in output (templates, icons, fonts)
```
#### Progressive Disclosure
Skills use a three-level loading system:
1. **Metadata** (name + description) - Always in context (~100 words)
2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)
These word counts are approximate, and you should feel free to go longer if needed.
**Key patterns:**
- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md with guidance on when to read them
- For large reference files (>300 lines), include a table of contents
**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
├── aws.md
├── gcp.md
└── azure.md
```
Claude reads only the relevant reference file.
#### Principle of Lack of Surprise
This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents, if described to the user, should not surprise them in intent. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.
#### Writing Patterns
Prefer using the imperative form in instructions.
**Defining output formats** - You can do it like this:
```markdown
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
```
**Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
```markdown
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
```
### Writing Style
Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
### Test Cases
After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
```json
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "User's task prompt",
"expected_output": "Description of expected result",
"files": []
}
]
}
```
See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
## Running and evaluating test cases
This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.
Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.
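For orientation, a sketch of where things land (names are hypothetical, and again, only create directories as you go):
```bash
# <skill>-workspace/iteration-1/eval-0/with_skill/outputs/ and so on
mkdir -p my-skill-workspace/iteration-1/eval-0/with_skill/outputs
mkdir -p my-skill-workspace/iteration-1/eval-0/without_skill/outputs
```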
### Step 1: Spawn all runs (with-skill AND baseline) in the same turn
For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
**With-skill run:**
```
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
```
**Baseline run** (same prompt, but the baseline depends on context):
- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.
Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
```json
{
"eval_id": 0,
"eval_name": "descriptive-name-here",
"prompt": "The user's task prompt",
"assertions": []
}
```
### Step 2: While runs are in progress, draft assertions
Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.
Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
### Step 3: As runs complete, capture timing data
When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
```json
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3
}
```
This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
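As a sketch, persisting one notification's numbers (the shell variables are hypothetical stand-ins for values read from the task notification):
```bash
RUN_DIR=my-skill-workspace/iteration-1/eval-0/with_skill  # hypothetical path
cat > "$RUN_DIR/timing.json" <<EOF
{
  "total_tokens": $TOTAL_TOKENS,
  "duration_ms": $DURATION_MS,
  "total_duration_seconds": $(awk "BEGIN {printf \"%.1f\", $DURATION_MS / 1000}")
}
EOF
```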
### Step 4: Grade, aggregate, and launch the viewer
Once all runs are done:
1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory (a shape sketch follows this list). The `grading.json` expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:
```bash
python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
```
This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.
Put each with_skill version before its baseline counterpart.
3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
4. **Launch the viewer** with both qualitative outputs and quantitative data:
```bash
nohup python <skill-creator-path>/eval-viewer/generate_review.py \
<workspace>/iteration-N \
--skill-name "my-skill" \
--benchmark <workspace>/iteration-N/benchmark.json \
> /dev/null 2>&1 &
VIEWER_PID=$!
```
For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
**Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.
Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
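Because step 1's field names are load-bearing, here is a minimal sketch of a valid `grading.json` (the assertion text and evidence are illustrative; `$RUN_DIR` is a hypothetical variable pointing at one run directory):
```bash
cat > "$RUN_DIR/grading.json" <<'EOF'
{
  "expectations": [
    {
      "text": "Output CSV contains a header row",
      "passed": true,
      "evidence": "first line of output.csv is 'id,name,total'"
    }
  ]
}
EOF
```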
### What the user sees in the viewer
The "Outputs" tab shows one test case at a time:
- **Prompt**: the task that was given
- **Output**: the files the skill produced, rendered inline where possible
- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
- **Feedback**: a textbox that auto-saves as they type
- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox
The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.
### Step 5: Read the feedback
When the user tells you they're done, read `feedback.json`:
```json
{
"reviews": [
{"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
{"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
{"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
],
"status": "complete"
}
```
Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
Kill the viewer server when you're done with it:
```bash
kill $VIEWER_PID 2>/dev/null
```
---
## Improving the skill
This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
### How to think about improvements
1. **Generalize from the feedback.** The big picture thing that's happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more, who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples in and out and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. If there's some stubborn issue, rather than putting in fiddly, overfitty changes or oppressively constrictive MUSTs, you might try branching out and using different metaphors, or recommending different patterns of working. It's relatively cheap to try and maybe you'll land on something great.
2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task, what the user actually wrote, and why they wrote it, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
### The iteration loop
After improving the skill:
1. Apply your improvements to the skill
2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
4. Wait for the user to review and tell you they're done
5. Read the new feedback, improve again, repeat
Keep going until:
- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress
---
## Advanced: Blind comparison
For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
---
## Description Optimization
The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
### Step 1: Generate trigger eval queries
Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
```json
[
{"query": "the user prompt", "should_trigger": true},
{"query": "another prompt", "should_trigger": false}
]
```
The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
### Step 2: Review with user
Present the eval set to the user for review using the HTML template:
1. Read the template from `assets/eval_review.html`
2. Replace the placeholders:
- `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
- `__SKILL_NAME_PLACEHOLDER__` → the skill's name
- `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
This step matters — bad eval queries lead to bad descriptions.
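A hedged sketch of step 2's substitution, using a Python heredoc for safe string handling (file names and the skill name are illustrative):
```bash
python3 - <<'EOF'
import pathlib
tpl = pathlib.Path("assets/eval_review.html").read_text()
evals = pathlib.Path("trigger-eval.json").read_text().strip()  # JSON array from step 1
tpl = (tpl.replace("__EVAL_DATA_PLACEHOLDER__", evals)  # raw JSON: it's a JS assignment
          .replace("__SKILL_NAME_PLACEHOLDER__", "my-skill")
          .replace("__SKILL_DESCRIPTION_PLACEHOLDER__", "Current description here"))
pathlib.Path("/tmp/eval_review_my-skill.html").write_text(tpl)
EOF
open /tmp/eval_review_my-skill.html  # xdg-open on Linux
```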
### Step 3: Run the optimization loop
Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
Save the eval set to the workspace, then run in the background:
```bash
# Background the run and log its output (log filename is illustrative)
nohup python -m scripts.run_loop \
  --eval-set <path-to-trigger-eval.json> \
  --skill-path <path-to-skill> \
  --model <model-id-powering-this-session> \
  --max-iterations 5 \
  --verbose > run_loop.log 2>&1 &
```
Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
### How skill triggering works
Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
### Step 4: Apply the result
Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
---
### Package and Present (only if `present_files` tool is available)
Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
```bash
python -m scripts.package_skill <path/to/skill-folder>
```
After packaging, direct the user to the resulting `.skill` file path so they can install it.
---
## Claude.ai-specific instructions
In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.
**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.
**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`) which is only available in Claude Code. Skip it if you're on Claude.ai.
**Blind comparison**: Requires subagents. Skip it.
**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.
**Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. In this case:
- **Preserve the original name.** Note the skill's directory name and `name` frontmatter field -- use them unchanged. E.g., if the installed skill is `research-helper`, output `research-helper.skill` (not `research-helper-v2`).
- **Copy to a writeable location before editing.** The installed skill path may be read-only. Copy to `/tmp/skill-name/`, edit there, and package from the copy.
- **If packaging manually, stage in `/tmp/` first**, then copy to the output directory -- direct writes may fail due to permissions.
---
## Cowork-Specific Instructions
If you're in Cowork, the main things to know are:
- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than in parallel.)
- You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
- For whatever reason, the Cowork setup seems to discourage Claude from generating the eval viewer after running the tests, so to reiterate: whether you're in Cowork or Claude Code, after running tests, always generate the eval viewer with `generate_review.py` (not your own boutique HTML) so the human can look at examples before you revise the skill and attempt corrections yourself. Sorry in advance, but I'm gonna go all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating outputs yourself. You want to get the examples in front of the human ASAP!
- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).
- Packaging works — `package_skill.py` just needs Python and a filesystem.
- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine since it uses `claude -p` via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.
- **Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. Follow the update guidance in the claude.ai section above.
---
## Reference files
The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
- `agents/grader.md` — How to evaluate assertions against outputs
- `agents/comparator.md` — How to do blind A/B comparison between two outputs
- `agents/analyzer.md` — How to analyze why one version beat another
The references/ directory has additional documentation:
- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
---
Repeating one more time the core loop here for emphasis:
- Figure out what the skill is about
- Draft or edit the skill
- Run claude-with-access-to-the-skill on test prompts
- With the user, evaluate the outputs:
- Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
- Run quantitative evals
- Repeat until you and the user are satisfied
- Package the final skill and return it to the user.
Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.
Good luck!
@@ -0,0 +1,274 @@
# Post-hoc Analyzer Agent
Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.
## Role
After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?
## Inputs
You receive these parameters in your prompt:
- **winner**: "A" or "B" (from blind comparison)
- **winner_skill_path**: Path to the skill that produced the winning output
- **winner_transcript_path**: Path to the execution transcript for the winner
- **loser_skill_path**: Path to the skill that produced the losing output
- **loser_transcript_path**: Path to the execution transcript for the loser
- **comparison_result_path**: Path to the blind comparator's output JSON
- **output_path**: Where to save the analysis results
## Process
### Step 1: Read Comparison Result
1. Read the blind comparator's output at comparison_result_path
2. Note the winning side (A or B), the reasoning, and any scores
3. Understand what the comparator valued in the winning output
### Step 2: Read Both Skills
1. Read the winner skill's SKILL.md and key referenced files
2. Read the loser skill's SKILL.md and key referenced files
3. Identify structural differences:
- Instructions clarity and specificity
- Script/tool usage patterns
- Example coverage
- Edge case handling
### Step 3: Read Both Transcripts
1. Read the winner's transcript
2. Read the loser's transcript
3. Compare execution patterns:
- How closely did each follow their skill's instructions?
- What tools were used differently?
- Where did the loser diverge from optimal behavior?
- Did either encounter errors or make recovery attempts?
### Step 4: Analyze Instruction Following
For each transcript, evaluate:
- Did the agent follow the skill's explicit instructions?
- Did the agent use the skill's provided tools/scripts?
- Were there missed opportunities to leverage skill content?
- Did the agent add unnecessary steps not in the skill?
Score instruction following 1-10 and note specific issues.
### Step 5: Identify Winner Strengths
Determine what made the winner better:
- Clearer instructions that led to better behavior?
- Better scripts/tools that produced better output?
- More comprehensive examples that guided edge cases?
- Better error handling guidance?
Be specific. Quote from skills/transcripts where relevant.
### Step 6: Identify Loser Weaknesses
Determine what held the loser back:
- Ambiguous instructions that led to suboptimal choices?
- Missing tools/scripts that forced workarounds?
- Gaps in edge case coverage?
- Poor error handling that caused failures?
### Step 7: Generate Improvement Suggestions
Based on the analysis, produce actionable suggestions for improving the loser skill:
- Specific instruction changes to make
- Tools/scripts to add or modify
- Examples to include
- Edge cases to address
Prioritize by impact. Focus on changes that would have changed the outcome.
### Step 8: Write Analysis Results
Save structured analysis to `{output_path}`.
## Output Format
Write a JSON file with this structure:
```json
{
"comparison_summary": {
"winner": "A",
"winner_skill": "path/to/winner/skill",
"loser_skill": "path/to/loser/skill",
"comparator_reasoning": "Brief summary of why comparator chose winner"
},
"winner_strengths": [
"Clear step-by-step instructions for handling multi-page documents",
"Included validation script that caught formatting errors",
"Explicit guidance on fallback behavior when OCR fails"
],
"loser_weaknesses": [
"Vague instruction 'process the document appropriately' led to inconsistent behavior",
"No script for validation, agent had to improvise and made errors",
"No guidance on OCR failure, agent gave up instead of trying alternatives"
],
"instruction_following": {
"winner": {
"score": 9,
"issues": [
"Minor: skipped optional logging step"
]
},
"loser": {
"score": 6,
"issues": [
"Did not use the skill's formatting template",
"Invented own approach instead of following step 3",
"Missed the 'always validate output' instruction"
]
}
},
"improvement_suggestions": [
{
"priority": "high",
"category": "instructions",
"suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
"expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
},
{
"priority": "high",
"category": "tools",
"suggestion": "Add validate_output.py script similar to winner skill's validation approach",
"expected_impact": "Would catch formatting errors before final output"
},
{
"priority": "medium",
"category": "error_handling",
"suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'",
"expected_impact": "Would prevent early failure on difficult documents"
}
],
"transcript_insights": {
"winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
"loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
}
}
```
## Guidelines
- **Be specific**: Quote from skills and transcripts, don't just say "instructions were unclear"
- **Be actionable**: Suggestions should be concrete changes, not vague advice
- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
- **Prioritize by impact**: Which changes would most likely have changed the outcome?
- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
- **Stay objective**: Analyze what happened, don't editorialize
- **Think about generalization**: Would this improvement help on other evals too?
## Categories for Suggestions
Use these categories to organize improvement suggestions:
| Category | Description |
|----------|-------------|
| `instructions` | Changes to the skill's prose instructions |
| `tools` | Scripts, templates, or utilities to add/modify |
| `examples` | Example inputs/outputs to include |
| `error_handling` | Guidance for handling failures |
| `structure` | Reorganization of skill content |
| `references` | External docs or resources to add |
## Priority Levels
- **high**: Would likely change the outcome of this comparison
- **medium**: Would improve quality but may not change win/loss
- **low**: Nice to have, marginal improvement
---
# Analyzing Benchmark Results
When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not suggest skill improvements.
## Role
Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.
## Inputs
You receive these parameters in your prompt:
- **benchmark_data_path**: Path to the in-progress benchmark.json with all run results
- **skill_path**: Path to the skill being benchmarked
- **output_path**: Where to save the notes (as JSON array of strings)
## Process
### Step 1: Read Benchmark Data
1. Read the benchmark.json containing all run results
2. Note the configurations tested (with_skill, without_skill)
3. Understand the run_summary aggregates already calculated
### Step 2: Analyze Per-Assertion Patterns
For each expectation across all runs:
- Does it **always pass** in both configurations? (may not differentiate skill value)
- Does it **always fail** in both configurations? (may be broken or beyond capability)
- Does it **always pass with skill but fail without**? (skill clearly adds value here)
- Does it **always fail with skill but pass without**? (skill may be hurting)
- Is it **highly variable**? (flaky expectation or non-deterministic behavior)
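To make those buckets concrete, here's one way they could be computed, assuming you've collected per-run pass booleans for a single expectation under each configuration:
```python
def classify(with_skill: list[bool], without_skill: list[bool]) -> str:
    """Bucket one expectation's cross-run results into the patterns above."""
    def rate(xs: list[bool]) -> float:
        return sum(xs) / len(xs)
    w, wo = rate(with_skill), rate(without_skill)
    if w == 1.0 and wo == 1.0:
        return "always passes both: may not differentiate skill value"
    if w == 0.0 and wo == 0.0:
        return "always fails both: broken or beyond capability"
    if w == 1.0 and wo == 0.0:
        return "skill clearly adds value here"
    if w == 0.0 and wo == 1.0:
        return "skill may be hurting"
    return "highly variable: flaky expectation or non-deterministic behavior"
```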
### Step 3: Analyze Cross-Eval Patterns
Look for patterns across evals:
- Are certain eval types consistently harder/easier?
- Do some evals show high variance while others are stable?
- Are there surprising results that contradict expectations?
### Step 4: Analyze Metrics Patterns
Look at time_seconds, tokens, tool_calls:
- Does the skill significantly increase execution time?
- Is there high variance in resource usage?
- Are there outlier runs that skew the aggregates?
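For the outlier question specifically, a simple spread check does the job (a sketch; the field layout follows benchmark.json, with metrics nested under `result`):
```python
import statistics

def flag_outliers(runs: list[dict], metric: str = "time_seconds",
                  threshold: float = 2.0) -> list[dict]:
    """Return runs whose metric sits more than `threshold` stddevs from the mean."""
    values = [r["result"][metric] for r in runs]
    if len(values) < 2:
        return []
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [r for r, v in zip(runs, values) if sd and abs(v - mean) / sd > threshold]
```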
### Step 5: Generate Notes
Write freeform observations as a list of strings. Each note should:
- State a specific observation
- Be grounded in the data (not speculation)
- Help the user understand something the aggregate metrics don't show
Examples:
- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
- "Without-skill runs consistently fail on table extraction expectations (0% pass rate)"
- "Skill adds 13s average execution time but improves pass rate by 50%"
- "Token usage is 80% higher with skill, primarily due to script output parsing"
- "All 3 without-skill runs for eval 1 produced empty output"
### Step 6: Write Notes
Save notes to `{output_path}` as a JSON array of strings:
```json
[
"Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
"Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure",
"Without-skill runs consistently fail on table extraction expectations",
"Skill adds 13s average execution time but improves pass rate by 50%"
]
```
## Guidelines
**DO:**
- Report what you observe in the data
- Be specific about which evals, expectations, or runs you're referring to
- Note patterns that aggregate metrics would hide
- Provide context that helps interpret the numbers
**DO NOT:**
- Suggest improvements to the skill (that's for the improvement step, not benchmarking)
- Make subjective quality judgments ("the output was good/bad")
- Speculate about causes without evidence
- Repeat information already in the run_summary aggregates
@@ -0,0 +1,202 @@
# Blind Comparator Agent
Compare two outputs WITHOUT knowing which skill produced them.
## Role
The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach.
Your judgment is based purely on output quality and task completion.
## Inputs
You receive these parameters in your prompt:
- **output_a_path**: Path to the first output file or directory
- **output_b_path**: Path to the second output file or directory
- **eval_prompt**: The original task/prompt that was executed
- **expectations**: List of expectations to check (optional - may be empty)
## Process
### Step 1: Read Both Outputs
1. Examine output A (file or directory)
2. Examine output B (file or directory)
3. Note the type, structure, and content of each
4. If outputs are directories, examine all relevant files inside
### Step 2: Understand the Task
1. Read the eval_prompt carefully
2. Identify what the task requires:
- What should be produced?
- What qualities matter (accuracy, completeness, format)?
- What would distinguish a good output from a poor one?
### Step 3: Generate Evaluation Rubric
Based on the task, generate a rubric with two dimensions:
**Content Rubric** (what the output contains):
| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|-----------|----------|----------------|---------------|
| Correctness | Major errors | Minor errors | Fully correct |
| Completeness | Missing key elements | Mostly complete | All elements present |
| Accuracy | Significant inaccuracies | Minor inaccuracies | Accurate throughout |
**Structure Rubric** (how the output is organized):
| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|-----------|----------|----------------|---------------|
| Organization | Disorganized | Reasonably organized | Clear, logical structure |
| Formatting | Inconsistent/broken | Mostly consistent | Professional, polished |
| Usability | Difficult to use | Usable with effort | Easy to use |
Adapt criteria to the specific task. For example:
- PDF form → "Field alignment", "Text readability", "Data placement"
- Document → "Section structure", "Heading hierarchy", "Paragraph flow"
- Data output → "Schema correctness", "Data types", "Completeness"
### Step 4: Evaluate Each Output Against the Rubric
For each output (A and B):
1. **Score each criterion** on the rubric (1-5 scale)
2. **Calculate dimension totals**: Content score, Structure score
3. **Calculate overall score**: Average of dimension scores, scaled to 1-10
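The scaling isn't spelled out beyond "scaled to 1-10"; doubling the mean of the two 1-5 dimension scores is the reading consistent with the Output Format example below:
```python
def rubric_scores(content: dict[str, int], structure: dict[str, int]) -> tuple[float, float, float]:
    """Dimension scores are criterion means (1-5); overall doubles their mean onto 1-10."""
    c = sum(content.values()) / len(content)
    s = sum(structure.values()) / len(structure)
    return round(c, 1), round(s, 1), round((c + s) / 2 * 2, 1)

# Reproduces output A from the Output Format example:
print(rubric_scores({"correctness": 5, "completeness": 5, "accuracy": 4},
                    {"organization": 4, "formatting": 5, "usability": 4}))
# -> (4.7, 4.3, 9.0)
```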
### Step 5: Check Assertions (if provided)
If expectations are provided:
1. Check each expectation against output A
2. Check each expectation against output B
3. Count pass rates for each output
4. Use expectation scores as secondary evidence (not the primary decision factor)
### Step 6: Determine the Winner
Compare A and B based on (in priority order):
1. **Primary**: Overall rubric score (content + structure)
2. **Secondary**: Assertion pass rates (if applicable)
3. **Tiebreaker**: If truly equal, declare a TIE
Be decisive - ties should be rare. One output is usually better, even if marginally.
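As a decision sketch (the closeness margin `eps` is an assumption to operationalize "truly equal"; this doc only fixes the priority order):
```python
def decide(a_overall: float, b_overall: float,
           a_pass: float | None = None, b_pass: float | None = None,
           eps: float = 0.25) -> str:
    """Rubric score decides; assertion pass rates only break near-ties; TIE is a last resort."""
    if abs(a_overall - b_overall) > eps:
        return "A" if a_overall > b_overall else "B"
    if a_pass is not None and b_pass is not None and a_pass != b_pass:
        return "A" if a_pass > b_pass else "B"
    return "TIE"
```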
### Step 7: Write Comparison Results
Save results to a JSON file at the path specified (or `comparison.json` if not specified).
## Output Format
Write a JSON file with this structure:
```json
{
"winner": "A",
"reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
"rubric": {
"A": {
"content": {
"correctness": 5,
"completeness": 5,
"accuracy": 4
},
"structure": {
"organization": 4,
"formatting": 5,
"usability": 4
},
"content_score": 4.7,
"structure_score": 4.3,
"overall_score": 9.0
},
"B": {
"content": {
"correctness": 3,
"completeness": 2,
"accuracy": 3
},
"structure": {
"organization": 3,
"formatting": 2,
"usability": 3
},
"content_score": 2.7,
"structure_score": 2.7,
"overall_score": 5.4
}
},
"output_quality": {
"A": {
"score": 9,
"strengths": ["Complete solution", "Well-formatted", "All fields present"],
"weaknesses": ["Minor style inconsistency in header"]
},
"B": {
"score": 5,
"strengths": ["Readable output", "Correct basic structure"],
"weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
}
},
"expectation_results": {
"A": {
"passed": 4,
"total": 5,
"pass_rate": 0.80,
"details": [
{"text": "Output includes name", "passed": true},
{"text": "Output includes date", "passed": true},
{"text": "Format is PDF", "passed": true},
{"text": "Contains signature", "passed": false},
{"text": "Readable text", "passed": true}
]
},
"B": {
"passed": 3,
"total": 5,
"pass_rate": 0.60,
"details": [
{"text": "Output includes name", "passed": true},
{"text": "Output includes date", "passed": false},
{"text": "Format is PDF", "passed": true},
{"text": "Contains signature", "passed": false},
{"text": "Readable text", "passed": true}
]
}
}
}
```
If no expectations were provided, omit the `expectation_results` field entirely.
## Field Descriptions
- **winner**: "A", "B", or "TIE"
- **reasoning**: Clear explanation of why the winner was chosen (or why it's a tie)
- **rubric**: Structured rubric evaluation for each output
- **content**: Scores for content criteria (correctness, completeness, accuracy)
- **structure**: Scores for structure criteria (organization, formatting, usability)
- **content_score**: Average of content criteria (1-5)
- **structure_score**: Average of structure criteria (1-5)
- **overall_score**: Combined score scaled to 1-10
- **output_quality**: Summary quality assessment
- **score**: 1-10 rating (should match rubric overall_score)
- **strengths**: List of positive aspects
- **weaknesses**: List of issues or shortcomings
- **expectation_results**: (Only if expectations provided)
- **passed**: Number of expectations that passed
- **total**: Total number of expectations
- **pass_rate**: Fraction passed (0.0 to 1.0)
- **details**: Individual expectation results
## Guidelines
- **Stay blind**: DO NOT try to infer which skill produced which output. Judge purely on output quality.
- **Be specific**: Cite specific examples when explaining strengths and weaknesses.
- **Be decisive**: Choose a winner unless outputs are genuinely equivalent.
- **Output quality first**: Assertion scores are secondary to overall task completion.
- **Be objective**: Don't favor outputs based on style preferences; focus on correctness and completeness.
- **Explain your reasoning**: The reasoning field should make it clear why you chose the winner.
- **Handle edge cases**: If both outputs fail, pick the one that fails less badly. If both are excellent, pick the one that's marginally better.
@@ -0,0 +1,223 @@
# Grader Agent
Evaluate expectations against an execution transcript and outputs.
## Role
The Grader reviews a transcript and output files, then determines whether each expectation passes or fails. Provide clear evidence for each judgment.
You have two jobs: grade the outputs, and critique the evals themselves. A passing grade on a weak assertion is worse than useless — it creates false confidence. When you notice an assertion that's trivially satisfied, or an important outcome that no assertion checks, say so.
## Inputs
You receive these parameters in your prompt:
- **expectations**: List of expectations to evaluate (strings)
- **transcript_path**: Path to the execution transcript (markdown file)
- **outputs_dir**: Directory containing output files from execution
## Process
### Step 1: Read the Transcript
1. Read the transcript file completely
2. Note the eval prompt, execution steps, and final result
3. Identify any issues or errors documented
### Step 2: Examine Output Files
1. List files in outputs_dir
2. Read/examine each file relevant to the expectations. If outputs aren't plain text, use the inspection tools provided in your prompt — don't rely solely on what the transcript says the executor produced.
3. Note contents, structure, and quality
### Step 3: Evaluate Each Assertion
For each expectation:
1. **Search for evidence** in the transcript and outputs
2. **Determine verdict**:
- **PASS**: Clear evidence the expectation is true AND the evidence reflects genuine task completion, not just surface-level compliance
- **FAIL**: No evidence, or evidence contradicts the expectation, or the evidence is superficial (e.g., correct filename but empty/wrong content)
3. **Cite the evidence**: Quote the specific text or describe what you found
### Step 4: Extract and Verify Claims
Beyond the predefined expectations, extract implicit claims from the outputs and verify them:
1. **Extract claims** from the transcript and outputs:
- Factual statements ("The form has 12 fields")
- Process claims ("Used pypdf to fill the form")
- Quality claims ("All fields were filled correctly")
2. **Verify each claim**:
- **Factual claims**: Can be checked against the outputs or external sources
- **Process claims**: Can be verified from the transcript
- **Quality claims**: Evaluate whether the claim is justified
3. **Flag unverifiable claims**: Note claims that cannot be verified with available information
This catches issues that predefined expectations might miss.
### Step 5: Read User Notes
If `{outputs_dir}/user_notes.md` exists:
1. Read it and note any uncertainties or issues flagged by the executor
2. Include relevant concerns in the grading output
3. These may reveal problems even when expectations pass
### Step 6: Critique the Evals
After grading, consider whether the evals themselves could be improved. Only surface suggestions when there's a clear gap.
Good suggestions test meaningful outcomes — assertions that are hard to satisfy without actually doing the work correctly. Think about what makes an assertion *discriminating*: it passes when the skill genuinely succeeds and fails when it doesn't.
Suggestions worth raising:
- An assertion that passed but would also pass for a clearly wrong output (e.g., checking filename existence but not file content)
- An important outcome you observed — good or bad — that no assertion covers at all
- An assertion that can't actually be verified from the available outputs
Keep the bar high. The goal is to flag things the eval author would say "good catch" about, not to nitpick every assertion.
### Step 7: Write Grading Results
Save results to `{outputs_dir}/../grading.json` (sibling to outputs_dir).
## Grading Criteria
**PASS when**:
- The transcript or outputs clearly demonstrate the expectation is true
- Specific evidence can be cited
- The evidence reflects genuine substance, not just surface compliance (e.g., a file exists AND contains correct content, not just the right filename)
**FAIL when**:
- No evidence found for the expectation
- Evidence contradicts the expectation
- The expectation cannot be verified from available information
- The evidence is superficial — the assertion is technically satisfied but the underlying task outcome is wrong or incomplete
- The output appears to meet the assertion by coincidence rather than by actually doing the work
**When uncertain**: The burden of proof is on the expectation. Without clear evidence that it holds, mark it FAIL.
### Step 8: Read Executor Metrics and Timing
1. If `{outputs_dir}/metrics.json` exists, read it and include in grading output
2. If `{outputs_dir}/../timing.json` exists, read it and include timing data
## Output Format
Write a JSON file with this structure:
```json
{
"expectations": [
{
"text": "The output includes the name 'John Smith'",
"passed": true,
"evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
},
{
"text": "The spreadsheet has a SUM formula in cell B10",
"passed": false,
"evidence": "No spreadsheet was created. The output was a text file."
},
{
"text": "The assistant used the skill's OCR script",
"passed": true,
"evidence": "Transcript Step 2 shows: 'Tool: Bash - python ocr_script.py image.png'"
}
],
"summary": {
"passed": 2,
"failed": 1,
"total": 3,
"pass_rate": 0.67
},
"execution_metrics": {
"tool_calls": {
"Read": 5,
"Write": 2,
"Bash": 8
},
"total_tool_calls": 15,
"total_steps": 6,
"errors_encountered": 0,
"output_chars": 12450,
"transcript_chars": 3200
},
"timing": {
"executor_duration_seconds": 165.0,
"grader_duration_seconds": 26.0,
"total_duration_seconds": 191.0
},
"claims": [
{
"claim": "The form has 12 fillable fields",
"type": "factual",
"verified": true,
"evidence": "Counted 12 fields in field_info.json"
},
{
"claim": "All required fields were populated",
"type": "quality",
"verified": false,
"evidence": "Reference section was left blank despite data being available"
}
],
"user_notes_summary": {
"uncertainties": ["Used 2023 data, may be stale"],
"needs_review": [],
"workarounds": ["Fell back to text overlay for non-fillable fields"]
},
"eval_feedback": {
"suggestions": [
{
"assertion": "The output includes the name 'John Smith'",
"reason": "A hallucinated document that mentions the name would also pass — consider checking it appears as the primary contact with matching phone and email from the input"
},
{
"reason": "No assertion checks whether the extracted phone numbers match the input — I observed incorrect numbers in the output that went uncaught"
}
],
"overall": "Assertions check presence but not correctness. Consider adding content verification."
}
}
```
## Field Descriptions
- **expectations**: Array of graded expectations
- **text**: The original expectation text
- **passed**: Boolean - true if expectation passes
- **evidence**: Specific quote or description supporting the verdict
- **summary**: Aggregate statistics
- **passed**: Count of passed expectations
- **failed**: Count of failed expectations
- **total**: Total expectations evaluated
- **pass_rate**: Fraction passed (0.0 to 1.0)
- **execution_metrics**: Copied from executor's metrics.json (if available)
- **output_chars**: Total character count of output files (proxy for tokens)
- **transcript_chars**: Character count of transcript
- **timing**: Wall clock timing from timing.json (if available)
- **executor_duration_seconds**: Time spent in executor subagent
- **total_duration_seconds**: Total elapsed time for the run
- **claims**: Extracted and verified claims from the output
- **claim**: The statement being verified
- **type**: "factual", "process", or "quality"
- **verified**: Boolean - whether the claim holds
- **evidence**: Supporting or contradicting evidence
- **user_notes_summary**: Issues flagged by the executor
- **uncertainties**: Things the executor wasn't sure about
- **needs_review**: Items requiring human attention
- **workarounds**: Places where the skill didn't work as expected
- **eval_feedback**: Improvement suggestions for the evals (only when warranted)
- **suggestions**: List of concrete suggestions, each with a `reason` and optionally an `assertion` it relates to
- **overall**: Brief assessment — can be "No suggestions, evals look solid" if nothing to flag
## Guidelines
- **Be objective**: Base verdicts on evidence, not assumptions
- **Be specific**: Quote the exact text that supports your verdict
- **Be thorough**: Check both transcript and output files
- **Be consistent**: Apply the same standard to each expectation
- **Explain failures**: Make it clear why evidence was insufficient
- **No partial credit**: Each expectation is pass or fail, not partial
@@ -0,0 +1,146 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Eval Set Review - __SKILL_NAME_PLACEHOLDER__</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
<style>
* { box-sizing: border-box; margin: 0; padding: 0; }
body { font-family: 'Lora', Georgia, serif; background: #faf9f5; padding: 2rem; color: #141413; }
h1 { font-family: 'Poppins', sans-serif; margin-bottom: 0.5rem; font-size: 1.5rem; }
.description { color: #b0aea5; margin-bottom: 1.5rem; font-style: italic; max-width: 900px; }
.controls { margin-bottom: 1rem; display: flex; gap: 0.5rem; }
.btn { font-family: 'Poppins', sans-serif; padding: 0.5rem 1rem; border: none; border-radius: 6px; cursor: pointer; font-size: 0.875rem; font-weight: 500; }
.btn-add { background: #6a9bcc; color: white; }
.btn-add:hover { background: #5889b8; }
.btn-export { background: #d97757; color: white; }
.btn-export:hover { background: #c4613f; }
table { width: 100%; max-width: 1100px; border-collapse: collapse; background: white; border-radius: 6px; overflow: hidden; box-shadow: 0 1px 3px rgba(0,0,0,0.08); }
th { font-family: 'Poppins', sans-serif; background: #141413; color: #faf9f5; padding: 0.75rem 1rem; text-align: left; font-size: 0.875rem; }
td { padding: 0.75rem 1rem; border-bottom: 1px solid #e8e6dc; vertical-align: top; }
tr:nth-child(even) td { background: #faf9f5; }
tr:hover td { background: #f3f1ea; }
.section-header td { background: #e8e6dc; font-family: 'Poppins', sans-serif; font-weight: 500; font-size: 0.8rem; color: #141413; text-transform: uppercase; letter-spacing: 0.05em; }
.query-input { width: 100%; padding: 0.4rem; border: 1px solid #e8e6dc; border-radius: 4px; font-size: 0.875rem; font-family: 'Lora', Georgia, serif; resize: vertical; min-height: 60px; }
.query-input:focus { outline: none; border-color: #d97757; box-shadow: 0 0 0 2px rgba(217,119,87,0.15); }
.toggle { position: relative; display: inline-block; width: 44px; height: 24px; }
.toggle input { opacity: 0; width: 0; height: 0; }
.toggle .slider { position: absolute; inset: 0; background: #b0aea5; border-radius: 24px; cursor: pointer; transition: 0.2s; }
.toggle .slider::before { content: ""; position: absolute; width: 18px; height: 18px; left: 3px; bottom: 3px; background: white; border-radius: 50%; transition: 0.2s; }
.toggle input:checked + .slider { background: #d97757; }
.toggle input:checked + .slider::before { transform: translateX(20px); }
.btn-delete { background: #c44; color: white; padding: 0.3rem 0.6rem; border: none; border-radius: 4px; cursor: pointer; font-size: 0.75rem; font-family: 'Poppins', sans-serif; }
.btn-delete:hover { background: #a33; }
.summary { margin-top: 1rem; color: #b0aea5; font-size: 0.875rem; }
</style>
</head>
<body>
<h1>Eval Set Review: <span id="skill-name">__SKILL_NAME_PLACEHOLDER__</span></h1>
<p class="description">Current description: <span id="skill-desc">__SKILL_DESCRIPTION_PLACEHOLDER__</span></p>
<div class="controls">
<button class="btn btn-add" onclick="addRow()">+ Add Query</button>
<button class="btn btn-export" onclick="exportEvalSet()">Export Eval Set</button>
</div>
<table>
<thead>
<tr>
<th style="width:65%">Query</th>
<th style="width:18%">Should Trigger</th>
<th style="width:10%">Actions</th>
</tr>
</thead>
<tbody id="eval-body"></tbody>
</table>
<p class="summary" id="summary"></p>
<script>
const EVAL_DATA = __EVAL_DATA_PLACEHOLDER__;
let evalItems = [...EVAL_DATA];
function render() {
const tbody = document.getElementById('eval-body');
tbody.innerHTML = '';
// Sort: should-trigger first, then should-not-trigger
const sorted = evalItems
.map((item, origIdx) => ({ ...item, origIdx }))
.sort((a, b) => (b.should_trigger ? 1 : 0) - (a.should_trigger ? 1 : 0));
let lastGroup = null;
sorted.forEach(item => {
const group = item.should_trigger ? 'trigger' : 'no-trigger';
if (group !== lastGroup) {
const headerRow = document.createElement('tr');
headerRow.className = 'section-header';
headerRow.innerHTML = `<td colspan="3">${item.should_trigger ? 'Should Trigger' : 'Should NOT Trigger'}</td>`;
tbody.appendChild(headerRow);
lastGroup = group;
}
const idx = item.origIdx;
const tr = document.createElement('tr');
tr.innerHTML = `
<td><textarea class="query-input" data-idx="${idx}" onchange="updateQuery(${idx}, this.value)">${escapeHtml(item.query)}</textarea></td>
<td>
<label class="toggle">
<input type="checkbox" ${item.should_trigger ? 'checked' : ''} onchange="updateTrigger(${idx}, this.checked)">
<span class="slider"></span>
</label>
<span style="margin-left:8px;font-size:0.8rem;color:#b0aea5">${item.should_trigger ? 'Yes' : 'No'}</span>
</td>
<td><button class="btn-delete" onclick="deleteRow(${idx})">Delete</button></td>
`;
tbody.appendChild(tr);
});
updateSummary();
}
function escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
function updateQuery(idx, value) { evalItems[idx].query = value; updateSummary(); }
function updateTrigger(idx, value) { evalItems[idx].should_trigger = value; render(); }
function deleteRow(idx) { evalItems.splice(idx, 1); render(); }
function addRow() {
  evalItems.push({ query: '', should_trigger: true });
  render();
  // Rows are re-sorted into groups on render, so the last .query-input in the
  // DOM isn't necessarily the row just added; find the new row by its data-idx.
  const added = document.querySelector(`.query-input[data-idx="${evalItems.length - 1}"]`);
  if (added) added.focus();
}
function updateSummary() {
const trigger = evalItems.filter(i => i.should_trigger).length;
const noTrigger = evalItems.filter(i => !i.should_trigger).length;
document.getElementById('summary').textContent =
`${evalItems.length} queries total: ${trigger} should trigger, ${noTrigger} should not trigger`;
}
function exportEvalSet() {
const valid = evalItems.filter(i => i.query.trim() !== '');
const data = valid.map(i => ({ query: i.query.trim(), should_trigger: i.should_trigger }));
const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'eval_set.json';
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
}
render();
</script>
</body>
</html>
@@ -0,0 +1,471 @@
#!/usr/bin/env python3
"""Generate and serve a review page for eval results.
Reads the workspace directory, discovers runs (directories with outputs/),
embeds all output data into a self-contained HTML page, and serves it via
a tiny HTTP server. Feedback auto-saves to feedback.json in the workspace.
Usage:
python generate_review.py <workspace-path> [--port PORT] [--skill-name NAME]
python generate_review.py <workspace-path> --previous-feedback /path/to/old/feedback.json
No dependencies beyond the Python stdlib are required.
"""
import argparse
import base64
import json
import mimetypes
import os
import re
import signal
import subprocess
import sys
import time
import webbrowser
from functools import partial
from http.server import HTTPServer, BaseHTTPRequestHandler
from pathlib import Path
# Files to exclude from output listings
METADATA_FILES = {"transcript.md", "user_notes.md", "metrics.json"}
# Extensions we render as inline text
TEXT_EXTENSIONS = {
".txt", ".md", ".json", ".csv", ".py", ".js", ".ts", ".tsx", ".jsx",
".yaml", ".yml", ".xml", ".html", ".css", ".sh", ".rb", ".go", ".rs",
".java", ".c", ".cpp", ".h", ".hpp", ".sql", ".r", ".toml",
}
# Extensions we render as inline images
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp"}
# MIME type overrides for common types
MIME_OVERRIDES = {
".svg": "image/svg+xml",
".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
}
def get_mime_type(path: Path) -> str:
ext = path.suffix.lower()
if ext in MIME_OVERRIDES:
return MIME_OVERRIDES[ext]
mime, _ = mimetypes.guess_type(str(path))
return mime or "application/octet-stream"
def find_runs(workspace: Path) -> list[dict]:
"""Recursively find directories that contain an outputs/ subdirectory."""
runs: list[dict] = []
_find_runs_recursive(workspace, workspace, runs)
    # eval_id may be present but None; coerce to +inf so unnumbered runs sort last
    # (mixing None with ints in the sort key would raise TypeError)
    runs.sort(key=lambda r: (r["eval_id"] if r.get("eval_id") is not None else float("inf"), r["id"]))
return runs
def _find_runs_recursive(root: Path, current: Path, runs: list[dict]) -> None:
if not current.is_dir():
return
outputs_dir = current / "outputs"
if outputs_dir.is_dir():
run = build_run(root, current)
if run:
runs.append(run)
return
skip = {"node_modules", ".git", "__pycache__", "skill", "inputs"}
for child in sorted(current.iterdir()):
if child.is_dir() and child.name not in skip:
_find_runs_recursive(root, child, runs)
def build_run(root: Path, run_dir: Path) -> dict | None:
"""Build a run dict with prompt, outputs, and grading data."""
prompt = ""
eval_id = None
# Try eval_metadata.json
for candidate in [run_dir / "eval_metadata.json", run_dir.parent / "eval_metadata.json"]:
if candidate.exists():
try:
metadata = json.loads(candidate.read_text())
prompt = metadata.get("prompt", "")
eval_id = metadata.get("eval_id")
except (json.JSONDecodeError, OSError):
pass
if prompt:
break
# Fall back to transcript.md
if not prompt:
for candidate in [run_dir / "transcript.md", run_dir / "outputs" / "transcript.md"]:
if candidate.exists():
try:
text = candidate.read_text()
match = re.search(r"## Eval Prompt\n\n([\s\S]*?)(?=\n##|$)", text)
if match:
prompt = match.group(1).strip()
except OSError:
pass
if prompt:
break
if not prompt:
prompt = "(No prompt found)"
run_id = str(run_dir.relative_to(root)).replace("/", "-").replace("\\", "-")
# Collect output files
outputs_dir = run_dir / "outputs"
output_files: list[dict] = []
if outputs_dir.is_dir():
for f in sorted(outputs_dir.iterdir()):
if f.is_file() and f.name not in METADATA_FILES:
output_files.append(embed_file(f))
# Load grading if present
grading = None
for candidate in [run_dir / "grading.json", run_dir.parent / "grading.json"]:
if candidate.exists():
try:
grading = json.loads(candidate.read_text())
except (json.JSONDecodeError, OSError):
pass
if grading:
break
return {
"id": run_id,
"prompt": prompt,
"eval_id": eval_id,
"outputs": output_files,
"grading": grading,
}
def embed_file(path: Path) -> dict:
"""Read a file and return an embedded representation."""
ext = path.suffix.lower()
mime = get_mime_type(path)
if ext in TEXT_EXTENSIONS:
try:
content = path.read_text(errors="replace")
except OSError:
content = "(Error reading file)"
return {
"name": path.name,
"type": "text",
"content": content,
}
elif ext in IMAGE_EXTENSIONS:
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "image",
"mime": mime,
"data_uri": f"data:{mime};base64,{b64}",
}
elif ext == ".pdf":
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "pdf",
"data_uri": f"data:{mime};base64,{b64}",
}
elif ext == ".xlsx":
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "xlsx",
"data_b64": b64,
}
else:
# Binary / unknown — base64 download link
try:
raw = path.read_bytes()
b64 = base64.b64encode(raw).decode("ascii")
except OSError:
return {"name": path.name, "type": "error", "content": "(Error reading file)"}
return {
"name": path.name,
"type": "binary",
"mime": mime,
"data_uri": f"data:{mime};base64,{b64}",
}
def load_previous_iteration(workspace: Path) -> dict[str, dict]:
"""Load previous iteration's feedback and outputs.
Returns a map of run_id -> {"feedback": str, "outputs": list[dict]}.
"""
result: dict[str, dict] = {}
# Load feedback
feedback_map: dict[str, str] = {}
feedback_path = workspace / "feedback.json"
if feedback_path.exists():
try:
data = json.loads(feedback_path.read_text())
feedback_map = {
r["run_id"]: r["feedback"]
for r in data.get("reviews", [])
if r.get("feedback", "").strip()
}
except (json.JSONDecodeError, OSError, KeyError):
pass
# Load runs (to get outputs)
prev_runs = find_runs(workspace)
for run in prev_runs:
result[run["id"]] = {
"feedback": feedback_map.get(run["id"], ""),
"outputs": run.get("outputs", []),
}
# Also add feedback for run_ids that had feedback but no matching run
for run_id, fb in feedback_map.items():
if run_id not in result:
result[run_id] = {"feedback": fb, "outputs": []}
return result
def generate_html(
runs: list[dict],
skill_name: str,
previous: dict[str, dict] | None = None,
benchmark: dict | None = None,
) -> str:
"""Generate the complete standalone HTML page with embedded data."""
template_path = Path(__file__).parent / "viewer.html"
template = template_path.read_text()
# Build previous_feedback and previous_outputs maps for the template
previous_feedback: dict[str, str] = {}
previous_outputs: dict[str, list[dict]] = {}
if previous:
for run_id, data in previous.items():
if data.get("feedback"):
previous_feedback[run_id] = data["feedback"]
if data.get("outputs"):
previous_outputs[run_id] = data["outputs"]
embedded = {
"skill_name": skill_name,
"runs": runs,
"previous_feedback": previous_feedback,
"previous_outputs": previous_outputs,
}
if benchmark:
embedded["benchmark"] = benchmark
data_json = json.dumps(embedded)
return template.replace("/*__EMBEDDED_DATA__*/", f"const EMBEDDED_DATA = {data_json};")
# ---------------------------------------------------------------------------
# HTTP server (stdlib only, zero dependencies)
# ---------------------------------------------------------------------------
def _kill_port(port: int) -> None:
"""Kill any process listening on the given port."""
try:
result = subprocess.run(
["lsof", "-ti", f":{port}"],
capture_output=True, text=True, timeout=5,
)
for pid_str in result.stdout.strip().split("\n"):
if pid_str.strip():
try:
os.kill(int(pid_str.strip()), signal.SIGTERM)
except (ProcessLookupError, ValueError):
pass
if result.stdout.strip():
time.sleep(0.5)
except subprocess.TimeoutExpired:
pass
except FileNotFoundError:
print("Note: lsof not found, cannot check if port is in use", file=sys.stderr)
class ReviewHandler(BaseHTTPRequestHandler):
"""Serves the review HTML and handles feedback saves.
Regenerates the HTML on each page load so that refreshing the browser
picks up new eval outputs without restarting the server.
"""
def __init__(
self,
workspace: Path,
skill_name: str,
feedback_path: Path,
previous: dict[str, dict],
benchmark_path: Path | None,
*args,
**kwargs,
):
self.workspace = workspace
self.skill_name = skill_name
self.feedback_path = feedback_path
self.previous = previous
self.benchmark_path = benchmark_path
super().__init__(*args, **kwargs)
def do_GET(self) -> None:
if self.path == "/" or self.path == "/index.html":
# Regenerate HTML on each request (re-scans workspace for new outputs)
runs = find_runs(self.workspace)
benchmark = None
if self.benchmark_path and self.benchmark_path.exists():
try:
benchmark = json.loads(self.benchmark_path.read_text())
except (json.JSONDecodeError, OSError):
pass
html = generate_html(runs, self.skill_name, self.previous, benchmark)
content = html.encode("utf-8")
self.send_response(200)
self.send_header("Content-Type", "text/html; charset=utf-8")
self.send_header("Content-Length", str(len(content)))
self.end_headers()
self.wfile.write(content)
elif self.path == "/api/feedback":
data = b"{}"
if self.feedback_path.exists():
data = self.feedback_path.read_bytes()
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.send_header("Content-Length", str(len(data)))
self.end_headers()
self.wfile.write(data)
else:
self.send_error(404)
def do_POST(self) -> None:
if self.path == "/api/feedback":
length = int(self.headers.get("Content-Length", 0))
body = self.rfile.read(length)
try:
data = json.loads(body)
if not isinstance(data, dict) or "reviews" not in data:
raise ValueError("Expected JSON object with 'reviews' key")
self.feedback_path.write_text(json.dumps(data, indent=2) + "\n")
resp = b'{"ok":true}'
self.send_response(200)
except (json.JSONDecodeError, OSError, ValueError) as e:
resp = json.dumps({"error": str(e)}).encode()
self.send_response(500)
self.send_header("Content-Type", "application/json")
self.send_header("Content-Length", str(len(resp)))
self.end_headers()
self.wfile.write(resp)
else:
self.send_error(404)
def log_message(self, format: str, *args: object) -> None:
# Suppress request logging to keep terminal clean
pass
def main() -> None:
parser = argparse.ArgumentParser(description="Generate and serve eval review")
parser.add_argument("workspace", type=Path, help="Path to workspace directory")
parser.add_argument("--port", "-p", type=int, default=3117, help="Server port (default: 3117)")
parser.add_argument("--skill-name", "-n", type=str, default=None, help="Skill name for header")
parser.add_argument(
"--previous-workspace", type=Path, default=None,
help="Path to previous iteration's workspace (shows old outputs and feedback as context)",
)
parser.add_argument(
"--benchmark", type=Path, default=None,
help="Path to benchmark.json to show in the Benchmark tab",
)
parser.add_argument(
"--static", "-s", type=Path, default=None,
help="Write standalone HTML to this path instead of starting a server",
)
args = parser.parse_args()
workspace = args.workspace.resolve()
if not workspace.is_dir():
print(f"Error: {workspace} is not a directory", file=sys.stderr)
sys.exit(1)
runs = find_runs(workspace)
if not runs:
print(f"No runs found in {workspace}", file=sys.stderr)
sys.exit(1)
skill_name = args.skill_name or workspace.name.replace("-workspace", "")
feedback_path = workspace / "feedback.json"
previous: dict[str, dict] = {}
if args.previous_workspace:
previous = load_previous_iteration(args.previous_workspace.resolve())
benchmark_path = args.benchmark.resolve() if args.benchmark else None
benchmark = None
if benchmark_path and benchmark_path.exists():
try:
benchmark = json.loads(benchmark_path.read_text())
except (json.JSONDecodeError, OSError):
pass
if args.static:
html = generate_html(runs, skill_name, previous, benchmark)
args.static.parent.mkdir(parents=True, exist_ok=True)
args.static.write_text(html)
print(f"\n Static viewer written to: {args.static}\n")
sys.exit(0)
# Kill any existing process on the target port
port = args.port
_kill_port(port)
handler = partial(ReviewHandler, workspace, skill_name, feedback_path, previous, benchmark_path)
try:
server = HTTPServer(("127.0.0.1", port), handler)
except OSError:
# Port still in use after kill attempt — find a free one
server = HTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
url = f"http://localhost:{port}"
print(f"\n Eval Viewer")
print(f" ─────────────────────────────────")
print(f" URL: {url}")
print(f" Workspace: {workspace}")
print(f" Feedback: {feedback_path}")
if previous:
print(f" Previous: {args.previous_workspace} ({len(previous)} runs)")
if benchmark_path:
print(f" Benchmark: {benchmark_path}")
print(f"\n Press Ctrl+C to stop.\n")
webbrowser.open(url)
try:
server.serve_forever()
except KeyboardInterrupt:
print("\nStopped.")
server.server_close()
if __name__ == "__main__":
main()
File diff suppressed because it is too large
@@ -0,0 +1,430 @@
# JSON Schemas
This document defines the JSON schemas used by skill-creator.
---
## evals.json
Defines the evals for a skill. Located at `evals/evals.json` within the skill directory.
```json
{
"skill_name": "example-skill",
"evals": [
{
"id": 1,
"prompt": "User's example prompt",
"expected_output": "Description of expected result",
"files": ["evals/files/sample1.pdf"],
"expectations": [
"The output includes X",
"The skill used script Y"
]
}
]
}
```
**Fields:**
- `skill_name`: Name matching the skill's frontmatter
- `evals[].id`: Unique integer identifier
- `evals[].prompt`: The task to execute
- `evals[].expected_output`: Human-readable description of success
- `evals[].files`: Optional list of input file paths (relative to skill root)
- `evals[].expectations`: List of verifiable statements
---
## history.json
Tracks version progression in Improve mode. Located at workspace root.
```json
{
"started_at": "2026-01-15T10:30:00Z",
"skill_name": "pdf",
"current_best": "v2",
"iterations": [
{
"version": "v0",
"parent": null,
"expectation_pass_rate": 0.65,
"grading_result": "baseline",
"is_current_best": false
},
{
"version": "v1",
"parent": "v0",
"expectation_pass_rate": 0.75,
"grading_result": "won",
"is_current_best": false
},
{
"version": "v2",
"parent": "v1",
"expectation_pass_rate": 0.85,
"grading_result": "won",
"is_current_best": true
}
]
}
```
**Fields:**
- `started_at`: ISO timestamp of when improvement started
- `skill_name`: Name of the skill being improved
- `current_best`: Version identifier of the best performer
- `iterations[].version`: Version identifier (v0, v1, ...)
- `iterations[].parent`: Parent version this was derived from
- `iterations[].expectation_pass_rate`: Pass rate from grading
- `iterations[].grading_result`: "baseline", "won", "lost", or "tie"
- `iterations[].is_current_best`: Whether this is the current best version
---
## grading.json
Output from the grader agent. Located at `<run-dir>/grading.json`.
```json
{
"expectations": [
{
"text": "The output includes the name 'John Smith'",
"passed": true,
"evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
},
{
"text": "The spreadsheet has a SUM formula in cell B10",
"passed": false,
"evidence": "No spreadsheet was created. The output was a text file."
}
],
"summary": {
"passed": 2,
"failed": 1,
"total": 3,
"pass_rate": 0.67
},
"execution_metrics": {
"tool_calls": {
"Read": 5,
"Write": 2,
"Bash": 8
},
"total_tool_calls": 15,
"total_steps": 6,
"errors_encountered": 0,
"output_chars": 12450,
"transcript_chars": 3200
},
"timing": {
"executor_duration_seconds": 165.0,
"grader_duration_seconds": 26.0,
"total_duration_seconds": 191.0
},
"claims": [
{
"claim": "The form has 12 fillable fields",
"type": "factual",
"verified": true,
"evidence": "Counted 12 fields in field_info.json"
}
],
"user_notes_summary": {
"uncertainties": ["Used 2023 data, may be stale"],
"needs_review": [],
"workarounds": ["Fell back to text overlay for non-fillable fields"]
},
"eval_feedback": {
"suggestions": [
{
"assertion": "The output includes the name 'John Smith'",
"reason": "A hallucinated document that mentions the name would also pass"
}
],
"overall": "Assertions check presence but not correctness."
}
}
```
**Fields:**
- `expectations[]`: Graded expectations with evidence
- `summary`: Aggregate pass/fail counts
- `execution_metrics`: Tool usage and output size (from executor's metrics.json)
- `timing`: Wall clock timing (from timing.json)
- `claims`: Extracted and verified claims from the output
- `user_notes_summary`: Issues flagged by the executor
- `eval_feedback`: (optional) Improvement suggestions for the evals, only present when the grader identifies issues worth raising
---
## metrics.json
Output from the executor agent. Located at `<run-dir>/outputs/metrics.json`.
```json
{
"tool_calls": {
"Read": 5,
"Write": 2,
"Bash": 8,
"Edit": 1,
"Glob": 2,
"Grep": 0
},
"total_tool_calls": 18,
"total_steps": 6,
"files_created": ["filled_form.pdf", "field_values.json"],
"errors_encountered": 0,
"output_chars": 12450,
"transcript_chars": 3200
}
```
**Fields:**
- `tool_calls`: Count per tool type
- `total_tool_calls`: Sum of all tool calls
- `total_steps`: Number of major execution steps
- `files_created`: List of output files created
- `errors_encountered`: Number of errors during execution
- `output_chars`: Total character count of output files
- `transcript_chars`: Character count of transcript
---
## timing.json
Wall clock timing for a run. Located at `<run-dir>/timing.json`.
**How to capture:** When a subagent task completes, the task notification includes `total_tokens` and `duration_ms`. Save these immediately — they are not persisted anywhere else and cannot be recovered after the fact.
```json
{
"total_tokens": 84852,
"duration_ms": 23332,
"total_duration_seconds": 23.3,
"executor_start": "2026-01-15T10:30:00Z",
"executor_end": "2026-01-15T10:32:45Z",
"executor_duration_seconds": 165.0,
"grader_start": "2026-01-15T10:32:46Z",
"grader_end": "2026-01-15T10:33:12Z",
"grader_duration_seconds": 26.0
}
```
---
## benchmark.json
Output from Benchmark mode. Located at `benchmarks/<timestamp>/benchmark.json`.
```json
{
"metadata": {
"skill_name": "pdf",
"skill_path": "/path/to/pdf",
"executor_model": "claude-sonnet-4-20250514",
"analyzer_model": "most-capable-model",
"timestamp": "2026-01-15T10:30:00Z",
"evals_run": [1, 2, 3],
"runs_per_configuration": 3
},
"runs": [
{
"eval_id": 1,
"eval_name": "Ocean",
"configuration": "with_skill",
"run_number": 1,
"result": {
"pass_rate": 0.85,
"passed": 6,
"failed": 1,
"total": 7,
"time_seconds": 42.5,
"tokens": 3800,
"tool_calls": 18,
"errors": 0
},
"expectations": [
{"text": "...", "passed": true, "evidence": "..."}
],
"notes": [
"Used 2023 data, may be stale",
"Fell back to text overlay for non-fillable fields"
]
}
],
"run_summary": {
"with_skill": {
"pass_rate": {"mean": 0.85, "stddev": 0.05, "min": 0.80, "max": 0.90},
"time_seconds": {"mean": 45.0, "stddev": 12.0, "min": 32.0, "max": 58.0},
"tokens": {"mean": 3800, "stddev": 400, "min": 3200, "max": 4100}
},
"without_skill": {
"pass_rate": {"mean": 0.35, "stddev": 0.08, "min": 0.28, "max": 0.45},
"time_seconds": {"mean": 32.0, "stddev": 8.0, "min": 24.0, "max": 42.0},
"tokens": {"mean": 2100, "stddev": 300, "min": 1800, "max": 2500}
},
"delta": {
"pass_rate": "+0.50",
"time_seconds": "+13.0",
"tokens": "+1700"
}
},
"notes": [
"Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
"Eval 3 shows high variance (50% ± 40%) - may be flaky or model-dependent",
"Without-skill runs consistently fail on table extraction expectations",
"Skill adds 13s average execution time but improves pass rate by 50%"
]
}
```
**Fields:**
- `metadata`: Information about the benchmark run
- `skill_name`: Name of the skill
- `timestamp`: When the benchmark was run
- `evals_run`: List of eval names or IDs
- `runs_per_configuration`: Number of runs per config (e.g. 3)
- `runs[]`: Individual run results
- `eval_id`: Numeric eval identifier
- `eval_name`: Human-readable eval name (used as section header in the viewer)
- `configuration`: Must be `"with_skill"` or `"without_skill"` (the viewer uses this exact string for grouping and color coding)
- `run_number`: Integer run number (1, 2, 3...)
  - `result`: Nested object with `pass_rate`, `passed`, `failed`, `total`, `time_seconds`, `tokens`, `tool_calls`, `errors`
- `run_summary`: Statistical aggregates per configuration
- `with_skill` / `without_skill`: Each contains `pass_rate`, `time_seconds`, `tokens` objects with `mean` and `stddev` fields
- `delta`: Difference strings like `"+0.50"`, `"+13.0"`, `"+1700"`
- `notes`: Freeform observations from the analyzer
**Important:** The viewer reads these field names exactly. Using `config` instead of `configuration`, or putting `pass_rate` at the top level of a run instead of nested under `result`, will cause the viewer to show empty/zero values. Always reference this schema when generating benchmark.json manually.
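A few lines of validation before opening the viewer can save a confusing debugging session. A minimal sketch (the key sets mirror the schema above; the script itself is illustrative, not part of the harness):
```python
import json
import sys
from pathlib import Path

REQUIRED_RUN_KEYS = {"eval_id", "configuration", "run_number", "result"}
REQUIRED_RESULT_KEYS = {"pass_rate", "passed", "total", "time_seconds", "tokens", "errors"}
VALID_CONFIGS = {"with_skill", "without_skill"}

def check_benchmark(path: str) -> int:
    """Return the number of schema problems found in a benchmark.json."""
    data = json.loads(Path(path).read_text())
    problems = 0
    for i, run in enumerate(data.get("runs", [])):
        if missing := REQUIRED_RUN_KEYS - run.keys():
            print(f"runs[{i}]: missing {sorted(missing)}")
            problems += 1
        if run.get("configuration") not in VALID_CONFIGS:
            print(f"runs[{i}]: configuration must be 'with_skill' or 'without_skill'")
            problems += 1
        if missing := REQUIRED_RESULT_KEYS - run.get("result", {}).keys():
            print(f"runs[{i}].result: missing {sorted(missing)}")
            problems += 1
    return problems

if __name__ == "__main__":
    sys.exit(1 if check_benchmark(sys.argv[1]) else 0)
```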
---
## comparison.json
Output from blind comparator. Located at `<grading-dir>/comparison-N.json`.
```json
{
"winner": "A",
"reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
"rubric": {
"A": {
"content": {
"correctness": 5,
"completeness": 5,
"accuracy": 4
},
"structure": {
"organization": 4,
"formatting": 5,
"usability": 4
},
"content_score": 4.7,
"structure_score": 4.3,
"overall_score": 9.0
},
"B": {
"content": {
"correctness": 3,
"completeness": 2,
"accuracy": 3
},
"structure": {
"organization": 3,
"formatting": 2,
"usability": 3
},
"content_score": 2.7,
"structure_score": 2.7,
"overall_score": 5.4
}
},
"output_quality": {
"A": {
"score": 9,
"strengths": ["Complete solution", "Well-formatted", "All fields present"],
"weaknesses": ["Minor style inconsistency in header"]
},
"B": {
"score": 5,
"strengths": ["Readable output", "Correct basic structure"],
"weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
}
},
"expectation_results": {
"A": {
"passed": 4,
"total": 5,
"pass_rate": 0.80,
"details": [
{"text": "Output includes name", "passed": true}
]
},
"B": {
"passed": 3,
"total": 5,
"pass_rate": 0.60,
"details": [
{"text": "Output includes name", "passed": true}
]
}
}
}
```
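The aggregate scores in the example follow a simple pattern: each `*_score` is the mean of its three subscores rounded to one decimal place, and `overall_score` is the sum of the two (for A, (5+5+4)/3 rounds to 4.7, (4+5+4)/3 to 4.3, and 4.7 + 4.3 = 9.0). A sketch of that derivation, assuming this is indeed how the comparator computes them:
```python
def rubric_side(content: dict[str, int], structure: dict[str, int]) -> dict:
    """Aggregate rubric subscores the way the example data suggests."""
    content_score = round(sum(content.values()) / len(content), 1)
    structure_score = round(sum(structure.values()) / len(structure), 1)
    return {
        "content": content,
        "structure": structure,
        "content_score": content_score,
        "structure_score": structure_score,
        "overall_score": round(content_score + structure_score, 1),
    }

# Reproduces side A of the example: content_score 4.7, structure_score 4.3, overall_score 9.0
side_a = rubric_side(
    {"correctness": 5, "completeness": 5, "accuracy": 4},
    {"organization": 4, "formatting": 5, "usability": 4},
)
```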
---
## analysis.json
Output from post-hoc analyzer. Located at `<grading-dir>/analysis.json`.
```json
{
"comparison_summary": {
"winner": "A",
"winner_skill": "path/to/winner/skill",
"loser_skill": "path/to/loser/skill",
"comparator_reasoning": "Brief summary of why comparator chose winner"
},
"winner_strengths": [
"Clear step-by-step instructions for handling multi-page documents",
"Included validation script that caught formatting errors"
],
"loser_weaknesses": [
"Vague instruction 'process the document appropriately' led to inconsistent behavior",
"No script for validation, agent had to improvise"
],
"instruction_following": {
"winner": {
"score": 9,
"issues": ["Minor: skipped optional logging step"]
},
"loser": {
"score": 6,
"issues": [
"Did not use the skill's formatting template",
"Invented own approach instead of following step 3"
]
}
},
"improvement_suggestions": [
{
"priority": "high",
"category": "instructions",
"suggestion": "Replace 'process the document appropriately' with explicit steps",
"expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
}
],
"transcript_insights": {
"winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script",
"loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods"
}
}
```
@@ -0,0 +1,401 @@
#!/usr/bin/env python3
"""
Aggregate individual run results into benchmark summary statistics.
Reads grading.json files from run directories and produces:
- run_summary with mean, stddev, min, max for each metric
- delta between with_skill and without_skill configurations
Usage:
python aggregate_benchmark.py <benchmark_dir>
Example:
python aggregate_benchmark.py benchmarks/2026-01-15T10-30-00/
The script supports two directory layouts:
Workspace layout (from skill-creator iterations):
<benchmark_dir>/
└── eval-N/
├── with_skill/
│ ├── run-1/grading.json
│ └── run-2/grading.json
└── without_skill/
├── run-1/grading.json
└── run-2/grading.json
Legacy layout (with runs/ subdirectory):
<benchmark_dir>/
└── runs/
└── eval-N/
├── with_skill/
│ └── run-1/grading.json
└── without_skill/
└── run-1/grading.json
"""
import argparse
import json
import math
import sys
from datetime import datetime, timezone
from pathlib import Path
def calculate_stats(values: list[float]) -> dict:
"""Calculate mean, stddev, min, max for a list of values."""
if not values:
return {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0}
n = len(values)
mean = sum(values) / n
if n > 1:
variance = sum((x - mean) ** 2 for x in values) / (n - 1)
stddev = math.sqrt(variance)
else:
stddev = 0.0
return {
"mean": round(mean, 4),
"stddev": round(stddev, 4),
"min": round(min(values), 4),
"max": round(max(values), 4)
}
def load_run_results(benchmark_dir: Path) -> dict:
"""
Load all run results from a benchmark directory.
Returns dict keyed by config name (e.g. "with_skill"/"without_skill",
or "new_skill"/"old_skill"), each containing a list of run results.
"""
# Support both layouts: eval dirs directly under benchmark_dir, or under runs/
runs_dir = benchmark_dir / "runs"
if runs_dir.exists():
search_dir = runs_dir
elif list(benchmark_dir.glob("eval-*")):
search_dir = benchmark_dir
else:
print(f"No eval directories found in {benchmark_dir} or {benchmark_dir / 'runs'}")
return {}
results: dict[str, list] = {}
for eval_idx, eval_dir in enumerate(sorted(search_dir.glob("eval-*"))):
metadata_path = eval_dir / "eval_metadata.json"
if metadata_path.exists():
try:
with open(metadata_path) as mf:
eval_id = json.load(mf).get("eval_id", eval_idx)
except (json.JSONDecodeError, OSError):
eval_id = eval_idx
else:
try:
eval_id = int(eval_dir.name.split("-")[1])
except ValueError:
eval_id = eval_idx
# Discover config directories dynamically rather than hardcoding names
for config_dir in sorted(eval_dir.iterdir()):
if not config_dir.is_dir():
continue
# Skip non-config directories (inputs, outputs, etc.)
if not list(config_dir.glob("run-*")):
continue
config = config_dir.name
if config not in results:
results[config] = []
for run_dir in sorted(config_dir.glob("run-*")):
run_number = int(run_dir.name.split("-")[1])
grading_file = run_dir / "grading.json"
if not grading_file.exists():
print(f"Warning: grading.json not found in {run_dir}")
continue
try:
with open(grading_file) as f:
grading = json.load(f)
except json.JSONDecodeError as e:
print(f"Warning: Invalid JSON in {grading_file}: {e}")
continue
# Extract metrics
result = {
"eval_id": eval_id,
"run_number": run_number,
"pass_rate": grading.get("summary", {}).get("pass_rate", 0.0),
"passed": grading.get("summary", {}).get("passed", 0),
"failed": grading.get("summary", {}).get("failed", 0),
"total": grading.get("summary", {}).get("total", 0),
}
# Extract timing — check grading.json first, then sibling timing.json
timing = grading.get("timing", {})
result["time_seconds"] = timing.get("total_duration_seconds", 0.0)
timing_file = run_dir / "timing.json"
if result["time_seconds"] == 0.0 and timing_file.exists():
try:
with open(timing_file) as tf:
timing_data = json.load(tf)
result["time_seconds"] = timing_data.get("total_duration_seconds", 0.0)
result["tokens"] = timing_data.get("total_tokens", 0)
except json.JSONDecodeError:
pass
# Extract metrics if available
metrics = grading.get("execution_metrics", {})
result["tool_calls"] = metrics.get("total_tool_calls", 0)
if not result.get("tokens"):
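                    # No token count captured; fall back to output characters as a rough size proxy.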
result["tokens"] = metrics.get("output_chars", 0)
result["errors"] = metrics.get("errors_encountered", 0)
# Extract expectations — viewer requires fields: text, passed, evidence
raw_expectations = grading.get("expectations", [])
                for exp in raw_expectations:
                    if any(k not in exp for k in ("text", "passed", "evidence")):
                        print(f"Warning: expectation in {grading_file} missing required fields (text, passed, evidence): {exp}")
result["expectations"] = raw_expectations
# Extract notes from user_notes_summary
notes_summary = grading.get("user_notes_summary", {})
notes = []
notes.extend(notes_summary.get("uncertainties", []))
notes.extend(notes_summary.get("needs_review", []))
notes.extend(notes_summary.get("workarounds", []))
result["notes"] = notes
results[config].append(result)
return results
def aggregate_results(results: dict) -> dict:
"""
Aggregate run results into summary statistics.
Returns run_summary with stats for each configuration and delta.
"""
run_summary = {}
configs = list(results.keys())
for config in configs:
runs = results.get(config, [])
if not runs:
run_summary[config] = {
"pass_rate": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
"time_seconds": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
"tokens": {"mean": 0, "stddev": 0, "min": 0, "max": 0}
}
continue
pass_rates = [r["pass_rate"] for r in runs]
times = [r["time_seconds"] for r in runs]
tokens = [r.get("tokens", 0) for r in runs]
run_summary[config] = {
"pass_rate": calculate_stats(pass_rates),
"time_seconds": calculate_stats(times),
"tokens": calculate_stats(tokens)
}
    # Emit a delta only when there are at least two configurations to compare;
    # with a single config there is no baseline, so a delta would be misleading.
    if len(configs) >= 2:
        primary = run_summary.get(configs[0], {})
        baseline = run_summary.get(configs[1], {})
        delta_pass_rate = primary.get("pass_rate", {}).get("mean", 0) - baseline.get("pass_rate", {}).get("mean", 0)
        delta_time = primary.get("time_seconds", {}).get("mean", 0) - baseline.get("time_seconds", {}).get("mean", 0)
        delta_tokens = primary.get("tokens", {}).get("mean", 0) - baseline.get("tokens", {}).get("mean", 0)
        run_summary["delta"] = {
            "pass_rate": f"{delta_pass_rate:+.2f}",
            "time_seconds": f"{delta_time:+.1f}",
            "tokens": f"{delta_tokens:+.0f}"
        }
    return run_summary
def generate_benchmark(benchmark_dir: Path, skill_name: str = "", skill_path: str = "") -> dict:
"""
Generate complete benchmark.json from run results.
"""
results = load_run_results(benchmark_dir)
run_summary = aggregate_results(results)
# Build runs array for benchmark.json
runs = []
for config in results:
for result in results[config]:
runs.append({
"eval_id": result["eval_id"],
"configuration": config,
"run_number": result["run_number"],
"result": {
"pass_rate": result["pass_rate"],
"passed": result["passed"],
"failed": result["failed"],
"total": result["total"],
"time_seconds": result["time_seconds"],
"tokens": result.get("tokens", 0),
"tool_calls": result.get("tool_calls", 0),
"errors": result.get("errors", 0)
},
"expectations": result["expectations"],
"notes": result["notes"]
})
# Determine eval IDs from results
eval_ids = sorted(set(
r["eval_id"]
for config in results.values()
for r in config
))
benchmark = {
"metadata": {
"skill_name": skill_name or "<skill-name>",
"skill_path": skill_path or "<path/to/skill>",
"executor_model": "<model-name>",
"analyzer_model": "<model-name>",
"timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
"evals_run": eval_ids,
"runs_per_configuration": 3
},
"runs": runs,
"run_summary": run_summary,
"notes": [] # To be filled by analyzer
}
return benchmark
def generate_markdown(benchmark: dict) -> str:
"""Generate human-readable benchmark.md from benchmark data."""
metadata = benchmark["metadata"]
run_summary = benchmark["run_summary"]
# Determine config names (excluding "delta")
configs = [k for k in run_summary if k != "delta"]
config_a = configs[0] if len(configs) >= 1 else "config_a"
config_b = configs[1] if len(configs) >= 2 else "config_b"
label_a = config_a.replace("_", " ").title()
label_b = config_b.replace("_", " ").title()
lines = [
f"# Skill Benchmark: {metadata['skill_name']}",
"",
f"**Model**: {metadata['executor_model']}",
f"**Date**: {metadata['timestamp']}",
f"**Evals**: {', '.join(map(str, metadata['evals_run']))} ({metadata['runs_per_configuration']} runs each per configuration)",
"",
"## Summary",
"",
f"| Metric | {label_a} | {label_b} | Delta |",
"|--------|------------|---------------|-------|",
]
a_summary = run_summary.get(config_a, {})
b_summary = run_summary.get(config_b, {})
delta = run_summary.get("delta", {})
# Format pass rate
a_pr = a_summary.get("pass_rate", {})
b_pr = b_summary.get("pass_rate", {})
lines.append(f"| Pass Rate | {a_pr.get('mean', 0)*100:.0f}% ± {a_pr.get('stddev', 0)*100:.0f}% | {b_pr.get('mean', 0)*100:.0f}% ± {b_pr.get('stddev', 0)*100:.0f}% | {delta.get('pass_rate', '')} |")
# Format time
a_time = a_summary.get("time_seconds", {})
b_time = b_summary.get("time_seconds", {})
lines.append(f"| Time | {a_time.get('mean', 0):.1f}s ± {a_time.get('stddev', 0):.1f}s | {b_time.get('mean', 0):.1f}s ± {b_time.get('stddev', 0):.1f}s | {delta.get('time_seconds', '')}s |")
# Format tokens
a_tokens = a_summary.get("tokens", {})
b_tokens = b_summary.get("tokens", {})
lines.append(f"| Tokens | {a_tokens.get('mean', 0):.0f} ± {a_tokens.get('stddev', 0):.0f} | {b_tokens.get('mean', 0):.0f} ± {b_tokens.get('stddev', 0):.0f} | {delta.get('tokens', '')} |")
# Notes section
if benchmark.get("notes"):
lines.extend([
"",
"## Notes",
""
])
for note in benchmark["notes"]:
lines.append(f"- {note}")
return "\n".join(lines)
def main():
parser = argparse.ArgumentParser(
description="Aggregate benchmark run results into summary statistics"
)
parser.add_argument(
"benchmark_dir",
type=Path,
help="Path to the benchmark directory"
)
parser.add_argument(
"--skill-name",
default="",
help="Name of the skill being benchmarked"
)
parser.add_argument(
"--skill-path",
default="",
help="Path to the skill being benchmarked"
)
parser.add_argument(
"--output", "-o",
type=Path,
help="Output path for benchmark.json (default: <benchmark_dir>/benchmark.json)"
)
args = parser.parse_args()
if not args.benchmark_dir.exists():
print(f"Directory not found: {args.benchmark_dir}")
sys.exit(1)
# Generate benchmark
benchmark = generate_benchmark(args.benchmark_dir, args.skill_name, args.skill_path)
# Determine output paths
output_json = args.output or (args.benchmark_dir / "benchmark.json")
output_md = output_json.with_suffix(".md")
# Write benchmark.json
with open(output_json, "w") as f:
json.dump(benchmark, f, indent=2)
print(f"Generated: {output_json}")
# Write benchmark.md
markdown = generate_markdown(benchmark)
with open(output_md, "w") as f:
f.write(markdown)
print(f"Generated: {output_md}")
# Print summary
run_summary = benchmark["run_summary"]
configs = [k for k in run_summary if k != "delta"]
delta = run_summary.get("delta", {})
print(f"\nSummary:")
for config in configs:
pr = run_summary[config]["pass_rate"]["mean"]
label = config.replace("_", " ").title()
print(f" {label}: {pr*100:.1f}% pass rate")
print(f" Delta: {delta.get('pass_rate', '')}")
if __name__ == "__main__":
main()
@@ -0,0 +1,326 @@
#!/usr/bin/env python3
"""Generate an HTML report from run_loop.py output.
Takes the JSON output from run_loop.py and generates a visual HTML report
showing each description attempt with check/x for each test case.
Distinguishes between train and test queries.
"""
import argparse
import html
import json
import sys
from pathlib import Path
def generate_html(data: dict, auto_refresh: bool = False, skill_name: str = "") -> str:
"""Generate HTML report from loop output data. If auto_refresh is True, adds a meta refresh tag."""
history = data.get("history", [])
title_prefix = html.escape(skill_name + " \u2014 ") if skill_name else ""
# Get all unique queries from train and test sets, with should_trigger info
train_queries: list[dict] = []
test_queries: list[dict] = []
if history:
for r in history[0].get("train_results", history[0].get("results", [])):
train_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})
if history[0].get("test_results"):
for r in history[0].get("test_results", []):
test_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})
refresh_tag = ' <meta http-equiv="refresh" content="5">\n' if auto_refresh else ""
html_parts = ["""<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
""" + refresh_tag + """ <title>""" + title_prefix + """Skill Description Optimization</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Lora', Georgia, serif;
max-width: 100%;
margin: 0 auto;
padding: 20px;
background: #faf9f5;
color: #141413;
}
h1 { font-family: 'Poppins', sans-serif; color: #141413; }
.explainer {
background: white;
padding: 15px;
border-radius: 6px;
margin-bottom: 20px;
border: 1px solid #e8e6dc;
color: #b0aea5;
font-size: 0.875rem;
line-height: 1.6;
}
.summary {
background: white;
padding: 15px;
border-radius: 6px;
margin-bottom: 20px;
border: 1px solid #e8e6dc;
}
.summary p { margin: 5px 0; }
.best { color: #788c5d; font-weight: bold; }
.table-container {
overflow-x: auto;
width: 100%;
}
table {
border-collapse: collapse;
background: white;
border: 1px solid #e8e6dc;
border-radius: 6px;
font-size: 12px;
min-width: 100%;
}
th, td {
padding: 8px;
text-align: left;
border: 1px solid #e8e6dc;
white-space: normal;
word-wrap: break-word;
}
th {
font-family: 'Poppins', sans-serif;
background: #141413;
color: #faf9f5;
font-weight: 500;
}
th.test-col {
background: #6a9bcc;
}
th.query-col { min-width: 200px; }
td.description {
font-family: monospace;
font-size: 11px;
word-wrap: break-word;
max-width: 400px;
}
td.result {
text-align: center;
font-size: 16px;
min-width: 40px;
}
td.test-result {
background: #f0f6fc;
}
.pass { color: #788c5d; }
.fail { color: #c44; }
.rate {
font-size: 9px;
color: #b0aea5;
display: block;
}
tr:hover { background: #faf9f5; }
.score {
display: inline-block;
padding: 2px 6px;
border-radius: 4px;
font-weight: bold;
font-size: 11px;
}
.score-good { background: #eef2e8; color: #788c5d; }
.score-ok { background: #fef3c7; color: #d97706; }
.score-bad { background: #fceaea; color: #c44; }
.train-label { color: #b0aea5; font-size: 10px; }
.test-label { color: #6a9bcc; font-size: 10px; font-weight: bold; }
.best-row { background: #f5f8f2; }
th.positive-col { border-bottom: 3px solid #788c5d; }
th.negative-col { border-bottom: 3px solid #c44; }
th.test-col.positive-col { border-bottom: 3px solid #788c5d; }
th.test-col.negative-col { border-bottom: 3px solid #c44; }
.legend { font-family: 'Poppins', sans-serif; display: flex; gap: 20px; margin-bottom: 10px; font-size: 13px; align-items: center; }
.legend-item { display: flex; align-items: center; gap: 6px; }
.legend-swatch { width: 16px; height: 16px; border-radius: 3px; display: inline-block; }
.swatch-positive { background: #141413; border-bottom: 3px solid #788c5d; }
.swatch-negative { background: #141413; border-bottom: 3px solid #c44; }
.swatch-test { background: #6a9bcc; }
.swatch-train { background: #141413; }
</style>
</head>
<body>
<h1>""" + title_prefix + """Skill Description Optimization</h1>
<div class="explainer">
<strong>Optimizing your skill's description.</strong> This page updates automatically as Claude tests different versions of your skill's description. Each row is an iteration — a new description attempt. The columns show test queries: green checkmarks mean the skill triggered correctly (or correctly didn't trigger), red crosses mean it got it wrong. The "Train" score shows performance on queries used to improve the description; the "Test" score shows performance on held-out queries the optimizer hasn't seen. When it's done, Claude will apply the best-performing description to your skill.
</div>
"""]
# Summary section
    best_test_score = data.get('best_test_score')
html_parts.append(f"""
<div class="summary">
<p><strong>Original:</strong> {html.escape(data.get('original_description', 'N/A'))}</p>
<p class="best"><strong>Best:</strong> {html.escape(data.get('best_description', 'N/A'))}</p>
<p><strong>Best Score:</strong> {data.get('best_score', 'N/A')} {'(test)' if best_test_score else '(train)'}</p>
<p><strong>Iterations:</strong> {data.get('iterations_run', 0)} | <strong>Train:</strong> {data.get('train_size', '?')} | <strong>Test:</strong> {data.get('test_size', '?')}</p>
</div>
""")
# Legend
html_parts.append("""
<div class="legend">
<span style="font-weight:600">Query columns:</span>
<span class="legend-item"><span class="legend-swatch swatch-positive"></span> Should trigger</span>
<span class="legend-item"><span class="legend-swatch swatch-negative"></span> Should NOT trigger</span>
<span class="legend-item"><span class="legend-swatch swatch-train"></span> Train</span>
<span class="legend-item"><span class="legend-swatch swatch-test"></span> Test</span>
</div>
""")
# Table header
html_parts.append("""
<div class="table-container">
<table>
<thead>
<tr>
<th>Iter</th>
<th>Train</th>
<th>Test</th>
<th class="query-col">Description</th>
""")
# Add column headers for train queries
for qinfo in train_queries:
polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
html_parts.append(f' <th class="{polarity}">{html.escape(qinfo["query"])}</th>\n')
# Add column headers for test queries (different color)
for qinfo in test_queries:
polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
html_parts.append(f' <th class="test-col {polarity}">{html.escape(qinfo["query"])}</th>\n')
html_parts.append(""" </tr>
</thead>
<tbody>
""")
    # Find best iteration for highlighting (guard against an empty history)
    if not history:
        best_iter = None
    elif test_queries:
        best_iter = max(history, key=lambda h: h.get("test_passed") or 0).get("iteration")
    else:
        best_iter = max(history, key=lambda h: h.get("train_passed", h.get("passed", 0))).get("iteration")
# Add rows for each iteration
for h in history:
iteration = h.get("iteration", "?")
train_passed = h.get("train_passed", h.get("passed", 0))
train_total = h.get("train_total", h.get("total", 0))
test_passed = h.get("test_passed")
test_total = h.get("test_total")
description = h.get("description", "")
train_results = h.get("train_results", h.get("results", []))
test_results = h.get("test_results", [])
# Create lookups for results by query
train_by_query = {r["query"]: r for r in train_results}
test_by_query = {r["query"]: r for r in test_results} if test_results else {}
# Compute aggregate correct/total runs across all retries
def aggregate_runs(results: list[dict]) -> tuple[int, int]:
correct = 0
total = 0
for r in results:
runs = r.get("runs", 0)
triggers = r.get("triggers", 0)
total += runs
if r.get("should_trigger", True):
correct += triggers
else:
correct += runs - triggers
return correct, total
train_correct, train_runs = aggregate_runs(train_results)
test_correct, test_runs = aggregate_runs(test_results)
# Determine score classes
def score_class(correct: int, total: int) -> str:
if total > 0:
ratio = correct / total
if ratio >= 0.8:
return "score-good"
elif ratio >= 0.5:
return "score-ok"
return "score-bad"
train_class = score_class(train_correct, train_runs)
test_class = score_class(test_correct, test_runs)
row_class = "best-row" if iteration == best_iter else ""
html_parts.append(f""" <tr class="{row_class}">
<td>{iteration}</td>
<td><span class="score {train_class}">{train_correct}/{train_runs}</span></td>
<td><span class="score {test_class}">{test_correct}/{test_runs}</span></td>
<td class="description">{html.escape(description)}</td>
""")
# Add result for each train query
for qinfo in train_queries:
r = train_by_query.get(qinfo["query"], {})
did_pass = r.get("pass", False)
triggers = r.get("triggers", 0)
runs = r.get("runs", 0)
            icon = "✓" if did_pass else "✗"
css_class = "pass" if did_pass else "fail"
html_parts.append(f' <td class="result {css_class}">{icon}<span class="rate">{triggers}/{runs}</span></td>\n')
# Add result for each test query (with different background)
for qinfo in test_queries:
r = test_by_query.get(qinfo["query"], {})
did_pass = r.get("pass", False)
triggers = r.get("triggers", 0)
runs = r.get("runs", 0)
            icon = "✓" if did_pass else "✗"
css_class = "pass" if did_pass else "fail"
html_parts.append(f' <td class="result test-result {css_class}">{icon}<span class="rate">{triggers}/{runs}</span></td>\n')
html_parts.append(" </tr>\n")
html_parts.append(""" </tbody>
</table>
</div>
""")
html_parts.append("""
</body>
</html>
""")
return "".join(html_parts)
def main():
parser = argparse.ArgumentParser(description="Generate HTML report from run_loop output")
parser.add_argument("input", help="Path to JSON output from run_loop.py (or - for stdin)")
parser.add_argument("-o", "--output", default=None, help="Output HTML file (default: stdout)")
parser.add_argument("--skill-name", default="", help="Skill name to include in the report title")
args = parser.parse_args()
if args.input == "-":
data = json.load(sys.stdin)
else:
data = json.loads(Path(args.input).read_text())
html_output = generate_html(data, skill_name=args.skill_name)
if args.output:
Path(args.output).write_text(html_output)
print(f"Report written to {args.output}", file=sys.stderr)
else:
print(html_output)
if __name__ == "__main__":
main()
@@ -0,0 +1,247 @@
#!/usr/bin/env python3
"""Improve a skill description based on eval results.
Takes eval results (from run_eval.py) and generates an improved description
by calling `claude -p` as a subprocess (same auth pattern as run_eval.py —
uses the session's Claude Code auth, no separate ANTHROPIC_API_KEY needed).
"""
import argparse
import json
import os
import re
import subprocess
import sys
from pathlib import Path
from scripts.utils import parse_skill_md
def _call_claude(prompt: str, model: str | None, timeout: int = 300) -> str:
"""Run `claude -p` with the prompt on stdin and return the text response.
Prompt goes over stdin (not argv) because it embeds the full SKILL.md
body and can easily exceed comfortable argv length.
"""
cmd = ["claude", "-p", "--output-format", "text"]
if model:
cmd.extend(["--model", model])
# Remove CLAUDECODE env var to allow nesting claude -p inside a
# Claude Code session. The guard is for interactive terminal conflicts;
# programmatic subprocess usage is safe. Same pattern as run_eval.py.
env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}
result = subprocess.run(
cmd,
input=prompt,
capture_output=True,
text=True,
env=env,
timeout=timeout,
)
if result.returncode != 0:
raise RuntimeError(
f"claude -p exited {result.returncode}\nstderr: {result.stderr}"
)
return result.stdout
def improve_description(
skill_name: str,
skill_content: str,
current_description: str,
eval_results: dict,
history: list[dict],
model: str,
test_results: dict | None = None,
log_dir: Path | None = None,
iteration: int | None = None,
) -> str:
"""Call Claude to improve the description based on eval results."""
failed_triggers = [
r for r in eval_results["results"]
if r["should_trigger"] and not r["pass"]
]
false_triggers = [
r for r in eval_results["results"]
if not r["should_trigger"] and not r["pass"]
]
# Build scores summary
train_score = f"{eval_results['summary']['passed']}/{eval_results['summary']['total']}"
if test_results:
test_score = f"{test_results['summary']['passed']}/{test_results['summary']['total']}"
scores_summary = f"Train: {train_score}, Test: {test_score}"
else:
scores_summary = f"Train: {train_score}"
prompt = f"""You are optimizing a skill description for a Claude Code skill called "{skill_name}". A "skill" is sort of like a prompt, but with progressive disclosure -- there's a title and description that Claude sees when deciding whether to use the skill, and then if it does use the skill, it reads the .md file which has lots more details and potentially links to other resources in the skill folder like helper files and scripts and additional documentation or examples.
The description appears in Claude's "available_skills" list. When a user sends a query, Claude decides whether to invoke the skill based solely on the title and on this description. Your goal is to write a description that triggers for relevant queries, and doesn't trigger for irrelevant ones.
Here's the current description:
<current_description>
"{current_description}"
</current_description>
Current scores ({scores_summary}):
<scores_summary>
"""
if failed_triggers:
prompt += "FAILED TO TRIGGER (should have triggered but didn't):\n"
for r in failed_triggers:
prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
prompt += "\n"
if false_triggers:
prompt += "FALSE TRIGGERS (triggered but shouldn't have):\n"
for r in false_triggers:
prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
prompt += "\n"
if history:
prompt += "PREVIOUS ATTEMPTS (do NOT repeat these — try something structurally different):\n\n"
for h in history:
train_s = f"{h.get('train_passed', h.get('passed', 0))}/{h.get('train_total', h.get('total', 0))}"
test_s = f"{h.get('test_passed', '?')}/{h.get('test_total', '?')}" if h.get('test_passed') is not None else None
score_str = f"train={train_s}" + (f", test={test_s}" if test_s else "")
prompt += f'<attempt {score_str}>\n'
prompt += f'Description: "{h["description"]}"\n'
if "results" in h:
prompt += "Train results:\n"
for r in h["results"]:
status = "PASS" if r["pass"] else "FAIL"
prompt += f' [{status}] "{r["query"][:80]}" (triggered {r["triggers"]}/{r["runs"]})\n'
if h.get("note"):
prompt += f'Note: {h["note"]}\n'
prompt += "</attempt>\n\n"
prompt += f"""</scores_summary>
Skill content (for context on what the skill does):
<skill_content>
{skill_content}
</skill_content>
Based on the failures, write a new and improved description that is more likely to trigger correctly. When I say "based on the failures", it's a bit of a tricky line to walk because we don't want to overfit to the specific cases you're seeing. So what I DON'T want you to do is produce an ever-expanding list of specific queries that this skill should or shouldn't trigger for. Instead, try to generalize from the failures to broader categories of user intent and situations where this skill would be useful or not useful. The reason for this is twofold:
1. Avoid overfitting
2. The list might get loooong and it's injected into ALL queries and there might be a lot of skills, so we don't want to blow too much space on any given description.
Concretely, your description should not be more than about 100-200 words, even if that comes at the cost of accuracy. There is a hard limit of 1024 characters — descriptions over that will be truncated, so stay comfortably under it.
Here are some tips that we've found to work well in writing these descriptions:
- The skill should be phrased in the imperative -- "Use this skill for" rather than "this skill does"
- The skill description should focus on the user's intent, what they are trying to achieve, vs. the implementation details of how the skill works.
- The description competes with other skills for Claude's attention — make it distinctive and immediately recognizable.
- If you're getting lots of failures after repeated attempts, change things up. Try different sentence structures or wordings.
I'd encourage you to be creative and mix up the style in different iterations since you'll have multiple opportunities to try different approaches and we'll just grab the highest-scoring one at the end.
Please respond with only the new description text in <new_description> tags, nothing else."""
text = _call_claude(prompt, model)
match = re.search(r"<new_description>(.*?)</new_description>", text, re.DOTALL)
description = match.group(1).strip().strip('"') if match else text.strip().strip('"')
transcript: dict = {
"iteration": iteration,
"prompt": prompt,
"response": text,
"parsed_description": description,
"char_count": len(description),
"over_limit": len(description) > 1024,
}
# Safety net: the prompt already states the 1024-char hard limit, but if
# the model blew past it anyway, make one fresh single-turn call that
# quotes the too-long version and asks for a shorter rewrite. (The old
# SDK path did this as a true multi-turn; `claude -p` is one-shot, so we
# inline the prior output into the new prompt instead.)
if len(description) > 1024:
shorten_prompt = (
f"{prompt}\n\n"
f"---\n\n"
f"A previous attempt produced this description, which at "
f"{len(description)} characters is over the 1024-character hard limit:\n\n"
f'"{description}"\n\n'
f"Rewrite it to be under 1024 characters while keeping the most "
f"important trigger words and intent coverage. Respond with only "
f"the new description in <new_description> tags."
)
shorten_text = _call_claude(shorten_prompt, model)
match = re.search(r"<new_description>(.*?)</new_description>", shorten_text, re.DOTALL)
shortened = match.group(1).strip().strip('"') if match else shorten_text.strip().strip('"')
transcript["rewrite_prompt"] = shorten_prompt
transcript["rewrite_response"] = shorten_text
transcript["rewrite_description"] = shortened
transcript["rewrite_char_count"] = len(shortened)
description = shortened
transcript["final_description"] = description
if log_dir:
log_dir.mkdir(parents=True, exist_ok=True)
log_file = log_dir / f"improve_iter_{iteration or 'unknown'}.json"
log_file.write_text(json.dumps(transcript, indent=2))
return description
def main():
parser = argparse.ArgumentParser(description="Improve a skill description based on eval results")
parser.add_argument("--eval-results", required=True, help="Path to eval results JSON (from run_eval.py)")
parser.add_argument("--skill-path", required=True, help="Path to skill directory")
parser.add_argument("--history", default=None, help="Path to history JSON (previous attempts)")
parser.add_argument("--model", required=True, help="Model for improvement")
parser.add_argument("--verbose", action="store_true", help="Print thinking to stderr")
args = parser.parse_args()
skill_path = Path(args.skill_path)
if not (skill_path / "SKILL.md").exists():
print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
sys.exit(1)
eval_results = json.loads(Path(args.eval_results).read_text())
history = []
if args.history:
history = json.loads(Path(args.history).read_text())
name, _, content = parse_skill_md(skill_path)
current_description = eval_results["description"]
if args.verbose:
print(f"Current: {current_description}", file=sys.stderr)
print(f"Score: {eval_results['summary']['passed']}/{eval_results['summary']['total']}", file=sys.stderr)
new_description = improve_description(
skill_name=name,
skill_content=content,
current_description=current_description,
eval_results=eval_results,
history=history,
model=args.model,
)
if args.verbose:
print(f"Improved: {new_description}", file=sys.stderr)
# Output as JSON with both the new description and updated history
output = {
"description": new_description,
"history": history + [{
"description": current_description,
"passed": eval_results["summary"]["passed"],
"failed": eval_results["summary"]["failed"],
"total": eval_results["summary"]["total"],
"results": eval_results["results"],
}],
}
print(json.dumps(output, indent=2))
if __name__ == "__main__":
main()
@@ -0,0 +1,136 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder
Usage:
python utils/package_skill.py <path/to/skill-folder> [output-directory]
Example:
python utils/package_skill.py skills/public/my-skill
python utils/package_skill.py skills/public/my-skill ./dist
"""
import fnmatch
import sys
import zipfile
from pathlib import Path
from scripts.quick_validate import validate_skill
# Patterns to exclude when packaging skills.
EXCLUDE_DIRS = {"__pycache__", "node_modules"}
EXCLUDE_GLOBS = {"*.pyc"}
EXCLUDE_FILES = {".DS_Store"}
# Directories excluded only at the skill root (not when nested deeper).
ROOT_EXCLUDE_DIRS = {"evals"}
def should_exclude(rel_path: Path) -> bool:
"""Check if a path should be excluded from packaging."""
parts = rel_path.parts
if any(part in EXCLUDE_DIRS for part in parts):
return True
# rel_path is relative to skill_path.parent, so parts[0] is the skill
# folder name and parts[1] (if present) is the first subdir.
if len(parts) > 1 and parts[1] in ROOT_EXCLUDE_DIRS:
return True
name = rel_path.name
if name in EXCLUDE_FILES:
return True
return any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_GLOBS)
def package_skill(skill_path, output_dir=None):
"""
Package a skill folder into a .skill file.
Args:
skill_path: Path to the skill folder
output_dir: Optional output directory for the .skill file (defaults to current directory)
Returns:
Path to the created .skill file, or None if error
"""
skill_path = Path(skill_path).resolve()
# Validate skill folder exists
if not skill_path.exists():
print(f"❌ Error: Skill folder not found: {skill_path}")
return None
if not skill_path.is_dir():
print(f"❌ Error: Path is not a directory: {skill_path}")
return None
# Validate SKILL.md exists
skill_md = skill_path / "SKILL.md"
if not skill_md.exists():
print(f"❌ Error: SKILL.md not found in {skill_path}")
return None
# Run validation before packaging
print("🔍 Validating skill...")
valid, message = validate_skill(skill_path)
if not valid:
print(f"❌ Validation failed: {message}")
print(" Please fix the validation errors before packaging.")
return None
print(f"{message}\n")
# Determine output location
skill_name = skill_path.name
if output_dir:
output_path = Path(output_dir).resolve()
output_path.mkdir(parents=True, exist_ok=True)
else:
output_path = Path.cwd()
skill_filename = output_path / f"{skill_name}.skill"
# Create the .skill file (zip format)
try:
with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
# Walk through the skill directory, excluding build artifacts
for file_path in skill_path.rglob('*'):
if not file_path.is_file():
continue
arcname = file_path.relative_to(skill_path.parent)
if should_exclude(arcname):
print(f" Skipped: {arcname}")
continue
zipf.write(file_path, arcname)
print(f" Added: {arcname}")
print(f"\n✅ Successfully packaged skill to: {skill_filename}")
return skill_filename
except Exception as e:
print(f"❌ Error creating .skill file: {e}")
return None
def main():
if len(sys.argv) < 2:
print("Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]")
print("\nExample:")
print(" python utils/package_skill.py skills/public/my-skill")
print(" python utils/package_skill.py skills/public/my-skill ./dist")
sys.exit(1)
skill_path = sys.argv[1]
output_dir = sys.argv[2] if len(sys.argv) > 2 else None
print(f"📦 Packaging skill: {skill_path}")
if output_dir:
print(f" Output directory: {output_dir}")
print()
result = package_skill(skill_path, output_dir)
if result:
sys.exit(0)
else:
sys.exit(1)
if __name__ == "__main__":
main()
@@ -0,0 +1,103 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""
import sys
import re
import yaml
from pathlib import Path
def validate_skill(skill_path):
"""Basic validation of a skill"""
skill_path = Path(skill_path)
# Check SKILL.md exists
skill_md = skill_path / 'SKILL.md'
if not skill_md.exists():
return False, "SKILL.md not found"
# Read and validate frontmatter
content = skill_md.read_text()
if not content.startswith('---'):
return False, "No YAML frontmatter found"
# Extract frontmatter
match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
if not match:
return False, "Invalid frontmatter format"
frontmatter_text = match.group(1)
# Parse YAML frontmatter
try:
frontmatter = yaml.safe_load(frontmatter_text)
if not isinstance(frontmatter, dict):
return False, "Frontmatter must be a YAML dictionary"
except yaml.YAMLError as e:
return False, f"Invalid YAML in frontmatter: {e}"
# Define allowed properties
ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata', 'compatibility'}
# Check for unexpected properties (excluding nested keys under metadata)
unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
if unexpected_keys:
return False, (
f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
)
# Check required fields
if 'name' not in frontmatter:
return False, "Missing 'name' in frontmatter"
if 'description' not in frontmatter:
return False, "Missing 'description' in frontmatter"
# Extract name for validation
name = frontmatter.get('name', '')
if not isinstance(name, str):
return False, f"Name must be a string, got {type(name).__name__}"
    name = name.strip()
    if not name:
        return False, "Name cannot be empty"
    # Check naming convention (kebab-case: lowercase with hyphens)
    if not re.match(r'^[a-z0-9-]+$', name):
        return False, f"Name '{name}' should be kebab-case (lowercase letters, digits, and hyphens only)"
    if name.startswith('-') or name.endswith('-') or '--' in name:
        return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
    # Check name length (max 64 characters per spec)
    if len(name) > 64:
        return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."
# Extract and validate description
description = frontmatter.get('description', '')
if not isinstance(description, str):
return False, f"Description must be a string, got {type(description).__name__}"
    description = description.strip()
    if not description:
        return False, "Description cannot be empty"
    # Check for angle brackets
    if '<' in description or '>' in description:
        return False, "Description cannot contain angle brackets (< or >)"
    # Check description length (max 1024 characters per spec)
    if len(description) > 1024:
        return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."
# Validate compatibility field if present (optional)
compatibility = frontmatter.get('compatibility', '')
if compatibility:
if not isinstance(compatibility, str):
return False, f"Compatibility must be a string, got {type(compatibility).__name__}"
if len(compatibility) > 500:
return False, f"Compatibility is too long ({len(compatibility)} characters). Maximum is 500 characters."
return True, "Skill is valid!"
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: python quick_validate.py <skill_directory>")
sys.exit(1)
valid, message = validate_skill(sys.argv[1])
print(message)
sys.exit(0 if valid else 1)
@@ -0,0 +1,310 @@
#!/usr/bin/env python3
"""Run trigger evaluation for a skill description.
Tests whether a skill's description causes Claude to trigger (read the skill)
for a set of queries. Outputs results as JSON.
"""
import argparse
import json
import os
import select
import subprocess
import sys
import time
import uuid
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path
from scripts.utils import parse_skill_md
def find_project_root() -> Path:
"""Find the project root by walking up from cwd looking for .claude/.
Mimics how Claude Code discovers its project root, so the command file
we create ends up where claude -p will look for it.
"""
current = Path.cwd()
for parent in [current, *current.parents]:
if (parent / ".claude").is_dir():
return parent
return current
def run_single_query(
query: str,
skill_name: str,
skill_description: str,
timeout: int,
project_root: str,
model: str | None = None,
) -> bool:
"""Run a single query and return whether the skill was triggered.
Creates a command file in .claude/commands/ so it appears in Claude's
available_skills list, then runs `claude -p` with the raw query.
Uses --include-partial-messages to detect triggering early from
stream events (content_block_start) rather than waiting for the
full assistant message, which only arrives after tool execution.
"""
unique_id = uuid.uuid4().hex[:8]
clean_name = f"{skill_name}-skill-{unique_id}"
project_commands_dir = Path(project_root) / ".claude" / "commands"
command_file = project_commands_dir / f"{clean_name}.md"
try:
project_commands_dir.mkdir(parents=True, exist_ok=True)
# Use YAML block scalar to avoid breaking on quotes in description
indented_desc = "\n ".join(skill_description.split("\n"))
command_content = (
f"---\n"
f"description: |\n"
f" {indented_desc}\n"
f"---\n\n"
f"# {skill_name}\n\n"
f"This skill handles: {skill_description}\n"
)
command_file.write_text(command_content)
cmd = [
"claude",
"-p", query,
"--output-format", "stream-json",
"--verbose",
"--include-partial-messages",
]
if model:
cmd.extend(["--model", model])
# Remove CLAUDECODE env var to allow nesting claude -p inside a
# Claude Code session. The guard is for interactive terminal conflicts;
# programmatic subprocess usage is safe.
env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}
process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
cwd=project_root,
env=env,
)
triggered = False
start_time = time.time()
buffer = ""
# Track state for stream event detection
pending_tool_name = None
accumulated_json = ""
try:
while time.time() - start_time < timeout:
if process.poll() is not None:
remaining = process.stdout.read()
if remaining:
buffer += remaining.decode("utf-8", errors="replace")
break
ready, _, _ = select.select([process.stdout], [], [], 1.0)
if not ready:
continue
chunk = os.read(process.stdout.fileno(), 8192)
if not chunk:
break
buffer += chunk.decode("utf-8", errors="replace")
while "\n" in buffer:
line, buffer = buffer.split("\n", 1)
line = line.strip()
if not line:
continue
try:
event = json.loads(line)
except json.JSONDecodeError:
continue
# Early detection via stream events
if event.get("type") == "stream_event":
se = event.get("event", {})
se_type = se.get("type", "")
if se_type == "content_block_start":
cb = se.get("content_block", {})
if cb.get("type") == "tool_use":
tool_name = cb.get("name", "")
if tool_name in ("Skill", "Read"):
pending_tool_name = tool_name
accumulated_json = ""
else:
return False
elif se_type == "content_block_delta" and pending_tool_name:
delta = se.get("delta", {})
if delta.get("type") == "input_json_delta":
accumulated_json += delta.get("partial_json", "")
if clean_name in accumulated_json:
return True
elif se_type in ("content_block_stop", "message_stop"):
if pending_tool_name:
return clean_name in accumulated_json
if se_type == "message_stop":
return False
# Fallback: full assistant message
elif event.get("type") == "assistant":
message = event.get("message", {})
for content_item in message.get("content", []):
if content_item.get("type") != "tool_use":
continue
tool_name = content_item.get("name", "")
tool_input = content_item.get("input", {})
if tool_name == "Skill" and clean_name in tool_input.get("skill", ""):
triggered = True
elif tool_name == "Read" and clean_name in tool_input.get("file_path", ""):
triggered = True
return triggered
elif event.get("type") == "result":
return triggered
finally:
# Clean up process on any exit path (return, exception, timeout)
if process.poll() is None:
process.kill()
process.wait()
return triggered
finally:
if command_file.exists():
command_file.unlink()
def run_eval(
eval_set: list[dict],
skill_name: str,
description: str,
num_workers: int,
timeout: int,
project_root: Path,
runs_per_query: int = 1,
trigger_threshold: float = 0.5,
model: str | None = None,
) -> dict:
"""Run the full eval set and return results."""
results = []
with ProcessPoolExecutor(max_workers=num_workers) as executor:
future_to_info = {}
for item in eval_set:
for run_idx in range(runs_per_query):
future = executor.submit(
run_single_query,
item["query"],
skill_name,
description,
timeout,
str(project_root),
model,
)
future_to_info[future] = (item, run_idx)
query_triggers: dict[str, list[bool]] = {}
query_items: dict[str, dict] = {}
for future in as_completed(future_to_info):
item, _ = future_to_info[future]
query = item["query"]
query_items[query] = item
if query not in query_triggers:
query_triggers[query] = []
try:
query_triggers[query].append(future.result())
except Exception as e:
print(f"Warning: query failed: {e}", file=sys.stderr)
query_triggers[query].append(False)
for query, triggers in query_triggers.items():
item = query_items[query]
trigger_rate = sum(triggers) / len(triggers)
should_trigger = item["should_trigger"]
if should_trigger:
did_pass = trigger_rate >= trigger_threshold
else:
did_pass = trigger_rate < trigger_threshold
results.append({
"query": query,
"should_trigger": should_trigger,
"trigger_rate": trigger_rate,
"triggers": sum(triggers),
"runs": len(triggers),
"pass": did_pass,
})
passed = sum(1 for r in results if r["pass"])
total = len(results)
return {
"skill_name": skill_name,
"description": description,
"results": results,
"summary": {
"total": total,
"passed": passed,
"failed": total - passed,
},
}
def main():
parser = argparse.ArgumentParser(description="Run trigger evaluation for a skill description")
parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
parser.add_argument("--skill-path", required=True, help="Path to skill directory")
parser.add_argument("--description", default=None, help="Override description to test")
parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
parser.add_argument("--model", default=None, help="Model to use for claude -p (default: user's configured model)")
parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
args = parser.parse_args()
eval_set = json.loads(Path(args.eval_set).read_text())
skill_path = Path(args.skill_path)
if not (skill_path / "SKILL.md").exists():
print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
sys.exit(1)
name, original_description, content = parse_skill_md(skill_path)
description = args.description or original_description
project_root = find_project_root()
if args.verbose:
print(f"Evaluating: {description}", file=sys.stderr)
output = run_eval(
eval_set=eval_set,
skill_name=name,
description=description,
num_workers=args.num_workers,
timeout=args.timeout,
project_root=project_root,
runs_per_query=args.runs_per_query,
trigger_threshold=args.trigger_threshold,
model=args.model,
)
if args.verbose:
summary = output["summary"]
print(f"Results: {summary['passed']}/{summary['total']} passed", file=sys.stderr)
for r in output["results"]:
status = "PASS" if r["pass"] else "FAIL"
rate_str = f"{r['triggers']}/{r['runs']}"
print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:70]}", file=sys.stderr)
print(json.dumps(output, indent=2))
if __name__ == "__main__":
main()
@@ -0,0 +1,328 @@
#!/usr/bin/env python3
"""Run the eval + improve loop until all pass or max iterations reached.
Combines run_eval.py and improve_description.py in a loop, tracking history
and returning the best description found. Supports train/test split to prevent
overfitting.
"""
import argparse
import json
import random
import sys
import tempfile
import time
import webbrowser
from pathlib import Path
from scripts.generate_report import generate_html
from scripts.improve_description import improve_description
from scripts.run_eval import find_project_root, run_eval
from scripts.utils import parse_skill_md
def split_eval_set(eval_set: list[dict], holdout: float, seed: int = 42) -> tuple[list[dict], list[dict]]:
"""Split eval set into train and test sets, stratified by should_trigger."""
random.seed(seed)
# Separate by should_trigger
trigger = [e for e in eval_set if e["should_trigger"]]
no_trigger = [e for e in eval_set if not e["should_trigger"]]
# Shuffle each group
random.shuffle(trigger)
random.shuffle(no_trigger)
# Calculate split points
n_trigger_test = max(1, int(len(trigger) * holdout))
n_no_trigger_test = max(1, int(len(no_trigger) * holdout))
# Split
test_set = trigger[:n_trigger_test] + no_trigger[:n_no_trigger_test]
train_set = trigger[n_trigger_test:] + no_trigger[n_no_trigger_test:]
return train_set, test_set
def run_loop(
eval_set: list[dict],
skill_path: Path,
description_override: str | None,
num_workers: int,
timeout: int,
max_iterations: int,
runs_per_query: int,
trigger_threshold: float,
holdout: float,
model: str,
verbose: bool,
live_report_path: Path | None = None,
log_dir: Path | None = None,
) -> dict:
"""Run the eval + improvement loop."""
project_root = find_project_root()
name, original_description, content = parse_skill_md(skill_path)
current_description = description_override or original_description
# Split into train/test if holdout > 0
if holdout > 0:
train_set, test_set = split_eval_set(eval_set, holdout)
if verbose:
print(f"Split: {len(train_set)} train, {len(test_set)} test (holdout={holdout})", file=sys.stderr)
else:
train_set = eval_set
test_set = []
history = []
exit_reason = "unknown"
for iteration in range(1, max_iterations + 1):
if verbose:
print(f"\n{'='*60}", file=sys.stderr)
print(f"Iteration {iteration}/{max_iterations}", file=sys.stderr)
print(f"Description: {current_description}", file=sys.stderr)
print(f"{'='*60}", file=sys.stderr)
# Evaluate train + test together in one batch for parallelism
all_queries = train_set + test_set
t0 = time.time()
all_results = run_eval(
eval_set=all_queries,
skill_name=name,
description=current_description,
num_workers=num_workers,
timeout=timeout,
project_root=project_root,
runs_per_query=runs_per_query,
trigger_threshold=trigger_threshold,
model=model,
)
eval_elapsed = time.time() - t0
# Split results back into train/test by matching queries
train_queries_set = {q["query"] for q in train_set}
train_result_list = [r for r in all_results["results"] if r["query"] in train_queries_set]
test_result_list = [r for r in all_results["results"] if r["query"] not in train_queries_set]
train_passed = sum(1 for r in train_result_list if r["pass"])
train_total = len(train_result_list)
train_summary = {"passed": train_passed, "failed": train_total - train_passed, "total": train_total}
train_results = {"results": train_result_list, "summary": train_summary}
if test_set:
test_passed = sum(1 for r in test_result_list if r["pass"])
test_total = len(test_result_list)
test_summary = {"passed": test_passed, "failed": test_total - test_passed, "total": test_total}
test_results = {"results": test_result_list, "summary": test_summary}
else:
test_results = None
test_summary = None
history.append({
"iteration": iteration,
"description": current_description,
"train_passed": train_summary["passed"],
"train_failed": train_summary["failed"],
"train_total": train_summary["total"],
"train_results": train_results["results"],
"test_passed": test_summary["passed"] if test_summary else None,
"test_failed": test_summary["failed"] if test_summary else None,
"test_total": test_summary["total"] if test_summary else None,
"test_results": test_results["results"] if test_results else None,
# For backward compat with report generator
"passed": train_summary["passed"],
"failed": train_summary["failed"],
"total": train_summary["total"],
"results": train_results["results"],
})
# Write live report if path provided
if live_report_path:
partial_output = {
"original_description": original_description,
"best_description": current_description,
"best_score": "in progress",
"iterations_run": len(history),
"holdout": holdout,
"train_size": len(train_set),
"test_size": len(test_set),
"history": history,
}
live_report_path.write_text(generate_html(partial_output, auto_refresh=True, skill_name=name))
if verbose:
def print_eval_stats(label, results, elapsed):
pos = [r for r in results if r["should_trigger"]]
neg = [r for r in results if not r["should_trigger"]]
tp = sum(r["triggers"] for r in pos)
pos_runs = sum(r["runs"] for r in pos)
fn = pos_runs - tp
fp = sum(r["triggers"] for r in neg)
neg_runs = sum(r["runs"] for r in neg)
tn = neg_runs - fp
total = tp + tn + fp + fn
precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0
accuracy = (tp + tn) / total if total > 0 else 0.0
print(f"{label}: {tp+tn}/{total} correct, precision={precision:.0%} recall={recall:.0%} accuracy={accuracy:.0%} ({elapsed:.1f}s)", file=sys.stderr)
for r in results:
status = "PASS" if r["pass"] else "FAIL"
rate_str = f"{r['triggers']}/{r['runs']}"
print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:60]}", file=sys.stderr)
print_eval_stats("Train", train_results["results"], eval_elapsed)
if test_summary:
print_eval_stats("Test ", test_results["results"], 0)
if train_summary["failed"] == 0:
exit_reason = f"all_passed (iteration {iteration})"
if verbose:
print(f"\nAll train queries passed on iteration {iteration}!", file=sys.stderr)
break
if iteration == max_iterations:
exit_reason = f"max_iterations ({max_iterations})"
if verbose:
print(f"\nMax iterations reached ({max_iterations}).", file=sys.stderr)
break
# Improve the description based on train results
if verbose:
print(f"\nImproving description...", file=sys.stderr)
t0 = time.time()
# Strip test scores from history so improvement model can't see them
blinded_history = [
{k: v for k, v in h.items() if not k.startswith("test_")}
for h in history
]
new_description = improve_description(
skill_name=name,
skill_content=content,
current_description=current_description,
eval_results=train_results,
history=blinded_history,
model=model,
log_dir=log_dir,
iteration=iteration,
)
improve_elapsed = time.time() - t0
if verbose:
print(f"Proposed ({improve_elapsed:.1f}s): {new_description}", file=sys.stderr)
current_description = new_description
# Find the best iteration by TEST score (or train if no test set)
if test_set:
best = max(history, key=lambda h: h["test_passed"] or 0)
best_score = f"{best['test_passed']}/{best['test_total']}"
else:
best = max(history, key=lambda h: h["train_passed"])
best_score = f"{best['train_passed']}/{best['train_total']}"
if verbose:
print(f"\nExit reason: {exit_reason}", file=sys.stderr)
print(f"Best score: {best_score} (iteration {best['iteration']})", file=sys.stderr)
return {
"exit_reason": exit_reason,
"original_description": original_description,
"best_description": best["description"],
"best_score": best_score,
"best_train_score": f"{best['train_passed']}/{best['train_total']}",
"best_test_score": f"{best['test_passed']}/{best['test_total']}" if test_set else None,
"final_description": current_description,
"iterations_run": len(history),
"holdout": holdout,
"train_size": len(train_set),
"test_size": len(test_set),
"history": history,
}
def main():
parser = argparse.ArgumentParser(description="Run eval + improve loop")
parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
parser.add_argument("--skill-path", required=True, help="Path to skill directory")
parser.add_argument("--description", default=None, help="Override starting description")
parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
parser.add_argument("--max-iterations", type=int, default=5, help="Max improvement iterations")
parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
parser.add_argument("--holdout", type=float, default=0.4, help="Fraction of eval set to hold out for testing (0 to disable)")
parser.add_argument("--model", required=True, help="Model for improvement")
parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
parser.add_argument("--report", default="auto", help="Generate HTML report at this path (default: 'auto' for temp file, 'none' to disable)")
parser.add_argument("--results-dir", default=None, help="Save all outputs (results.json, report.html, log.txt) to a timestamped subdirectory here")
args = parser.parse_args()
eval_set = json.loads(Path(args.eval_set).read_text())
skill_path = Path(args.skill_path)
if not (skill_path / "SKILL.md").exists():
print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
sys.exit(1)
name, _, _ = parse_skill_md(skill_path)
# Set up live report path
if args.report != "none":
if args.report == "auto":
timestamp = time.strftime("%Y%m%d_%H%M%S")
live_report_path = Path(tempfile.gettempdir()) / f"skill_description_report_{skill_path.name}_{timestamp}.html"
else:
live_report_path = Path(args.report)
# Open the report immediately so the user can watch
live_report_path.write_text("<html><body><h1>Starting optimization loop...</h1><meta http-equiv='refresh' content='5'></body></html>")
webbrowser.open(str(live_report_path))
else:
live_report_path = None
# Determine output directory (create before run_loop so logs can be written)
if args.results_dir:
timestamp = time.strftime("%Y-%m-%d_%H%M%S")
results_dir = Path(args.results_dir) / timestamp
results_dir.mkdir(parents=True, exist_ok=True)
else:
results_dir = None
    log_dir = results_dir / "logs" if results_dir else None
    if log_dir:
        log_dir.mkdir(parents=True, exist_ok=True)
output = run_loop(
eval_set=eval_set,
skill_path=skill_path,
description_override=args.description,
num_workers=args.num_workers,
timeout=args.timeout,
max_iterations=args.max_iterations,
runs_per_query=args.runs_per_query,
trigger_threshold=args.trigger_threshold,
holdout=args.holdout,
model=args.model,
verbose=args.verbose,
live_report_path=live_report_path,
log_dir=log_dir,
)
# Save JSON output
json_output = json.dumps(output, indent=2)
print(json_output)
if results_dir:
(results_dir / "results.json").write_text(json_output)
# Write final HTML report (without auto-refresh)
if live_report_path:
live_report_path.write_text(generate_html(output, auto_refresh=False, skill_name=name))
print(f"\nReport: {live_report_path}", file=sys.stderr)
if results_dir and live_report_path:
(results_dir / "report.html").write_text(generate_html(output, auto_refresh=False, skill_name=name))
if results_dir:
print(f"Results saved to: {results_dir}", file=sys.stderr)
if __name__ == "__main__":
main()
@@ -0,0 +1,47 @@
"""Shared utilities for skill-creator scripts."""
from pathlib import Path
def parse_skill_md(skill_path: Path) -> tuple[str, str, str]:
"""Parse a SKILL.md file, returning (name, description, full_content)."""
content = (skill_path / "SKILL.md").read_text()
lines = content.split("\n")
if lines[0].strip() != "---":
raise ValueError("SKILL.md missing frontmatter (no opening ---)")
end_idx = None
for i, line in enumerate(lines[1:], start=1):
if line.strip() == "---":
end_idx = i
break
if end_idx is None:
raise ValueError("SKILL.md missing frontmatter (no closing ---)")
name = ""
description = ""
frontmatter_lines = lines[1:end_idx]
i = 0
while i < len(frontmatter_lines):
line = frontmatter_lines[i]
if line.startswith("name:"):
name = line[len("name:"):].strip().strip('"').strip("'")
elif line.startswith("description:"):
value = line[len("description:"):].strip()
# Handle YAML multiline indicators (>, |, >-, |-)
if value in (">", "|", ">-", "|-"):
continuation_lines: list[str] = []
i += 1
while i < len(frontmatter_lines) and (frontmatter_lines[i].startswith(" ") or frontmatter_lines[i].startswith("\t")):
continuation_lines.append(frontmatter_lines[i].strip())
i += 1
description = " ".join(continuation_lines)
continue
else:
description = value.strip('"').strip("'")
i += 1
return name, description, content
@@ -0,0 +1,285 @@
---
name: zeroclaw
description: "Help users operate and interact with their ZeroClaw agent instance — through both the CLI (`zeroclaw` commands) and the REST/WebSocket gateway API. Use this skill whenever the user wants to: send messages to ZeroClaw, manage memory or cron jobs, check system status, configure channels or providers, hit the gateway API, troubleshoot their ZeroClaw setup, build from source, or do anything involving the `zeroclaw` binary or its HTTP endpoints. Trigger this even if the user just says things like 'check my agent status', 'schedule a reminder', 'store this in memory', 'list my cron jobs', 'send a message to my bot', 'set up Telegram', 'build zeroclaw', or 'my bot is broken' — these are all ZeroClaw operations."
---
# ZeroClaw Skill
You are helping a user operate their ZeroClaw agent instance. ZeroClaw is an autonomous agent runtime with a CLI and an HTTP/WebSocket gateway.
Your job is to understand what the user wants to accomplish and then **execute it** — run the command, make the API call, report the result. Do not just show commands for the user to copy-paste. Actually run them via the Bash tool and tell the user what happened. The only exception is destructive operations (clearing all memory, estop kill-all) where you should confirm first.
## Adaptive Expertise
Pay attention to how the user talks. Someone who says "can you hit the webhook endpoint with a POST" is telling you they know what they're doing — be concise, skip explanations, just execute. Someone who says "how do I make my bot remember things" needs more context about what's happening under the hood.
Signals of technical comfort: mentions specific endpoints, HTTP methods, JSON fields, talks about tokens/auth, uses CLI flags fluently, references config files directly.
Signals of less familiarity: asks "what does X do", uses casual language about the bot/agent, describes goals rather than mechanisms ("I want it to check something every morning").
Default to a middle ground — brief explanation of what you're about to do, then do it. Dial up or down from there based on cues.
## Discovery — Before You Act
Before running any ZeroClaw operation, make sure you know where things are:
1. **Find the binary.** Search in this order:
- `which zeroclaw` (PATH)
- The current project's build output: `./target/release/zeroclaw` or `./target/debug/zeroclaw` — this is the right choice when the user is working inside the ZeroClaw source tree and may have local changes
- Common install locations: `~/.cargo/bin/zeroclaw`, `~/Downloads/zeroclaw-bin/zeroclaw`
If no binary is found anywhere, offer to build from source (see "Building from Source" below). If the user is a developer working on ZeroClaw itself, they'll likely want the local build — watch for cues like them editing source files, mentioning PRs, or being in the project directory.
2. **Check if the gateway is running** (only needed for REST/WebSocket operations). A quick `curl -sf http://127.0.0.1:42617/health` tells you. If it's not running and the user wants REST access, let them know and offer to start it (`zeroclaw gateway` or `zeroclaw daemon`).
3. **Check auth status.** If the gateway requires pairing (`require_pairing = true` is the default), REST calls need a bearer token. Run `zeroclaw status` to see the current state, or check `~/.zeroclaw/config.toml` for a stored token under `[gateway]`.
Cache these findings for the conversation — don't re-discover every time.
## Important: REPL Limitation
`zeroclaw agent` (interactive REPL) requires interactive stdin, which doesn't work through the Bash tool. When the user wants to chat with their agent, use single-message mode instead:
```bash
zeroclaw agent -m "the message"
```
Each `-m` invocation is independent (no conversation history between calls). If the user needs multi-turn conversation, let them know they can run `zeroclaw agent` directly in their terminal, or use the WebSocket endpoint for programmatic streaming.
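For scripted use, each message can be sent as its own independent invocation. A minimal Python sketch (assuming the `zeroclaw` binary is on PATH):

```python
import subprocess

def ask(message: str) -> str:
    """Send one independent message to the agent and return its reply."""
    result = subprocess.run(
        ["zeroclaw", "agent", "-m", message],
        capture_output=True, text=True, timeout=120,
    )
    result.check_returncode()
    return result.stdout.strip()

print(ask("Summarize today's logs"))
```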
## First-Time Setup
If the user hasn't set up ZeroClaw yet (no `~/.zeroclaw/config.toml` exists), guide them through onboarding:
```bash
zeroclaw onboard # Quick mode — defaults to OpenRouter
zeroclaw onboard --provider anthropic # Use Anthropic directly
zeroclaw onboard --interactive # Step-by-step wizard
```
After onboarding, verify everything works:
```bash
zeroclaw status
zeroclaw doctor
```
If they already have a config but something is broken, `zeroclaw onboard --channels-only` repairs just the channel configuration without overwriting everything else.
## Building from Source
If the user wants to build ZeroClaw (or no binary is installed):
```bash
cargo build --release
```
This produces `target/release/zeroclaw`. For faster iteration during development, `cargo build` (debug mode) is quicker but produces a slower binary at `target/debug/zeroclaw`.
You can also run directly without a separate build step:
```bash
cargo run --release -- <subcommand> [args]
```
Before building, `cargo check` gives a quick compile validation without the full build.
## Choosing CLI vs REST
Both surfaces can do most things. Rules of thumb:
- **CLI is simpler** for one-off operations from the terminal. It handles auth internally and formats output nicely. Prefer CLI when the user is working locally.
- **REST is needed** when the user is building an integration, scripting from another language, or accessing a remote ZeroClaw instance. Also needed for streaming (WebSocket, SSE).
- If unclear, **default to CLI** — it's less setup.
## Core Operations
### Sending Messages
**CLI:** `zeroclaw agent -m "your message here"` — remember, always use `-m` mode, not bare `zeroclaw agent`.
**REST:**
```bash
curl -X POST http://127.0.0.1:42617/webhook \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"message": "your message here"}'
```
Response: `{"response": "...", "model": "..."}`
**WebSocket** (for streaming): connect to `ws://127.0.0.1:42617/ws/chat?token=<token>`, send `{"type": "message", "content": "..."}`, receive `{"type": "done", "full_response": "..."}`.
### System Status
Run `zeroclaw status` to see provider, model, uptime, channels, memory backend. For deeper diagnostics: `zeroclaw doctor`.
**REST:** `GET /api/status` (same info as JSON), `GET /health` (no auth, quick ok/not-ok).
### Memory
The CLI can list, get, and clear memories but **cannot store** them directly. To store a memory:
- Via agent: `zeroclaw agent -m "remember that my favorite color is blue"`
- Via REST: `POST /api/memory` with `{"key": "...", "content": "...", "category": "core"}`
**CLI (read/delete):**
- `zeroclaw memory list` — list all entries
- `zeroclaw memory list --category core --limit 10` — filtered
- `zeroclaw memory get "key-name"` — get specific entry
- `zeroclaw memory stats` — usage statistics
- `zeroclaw memory clear --key "prefix" --yes` — delete entries (confirm with user first)
**REST (full CRUD):**
- `GET /api/memory` — list all (optional: `?query=search+text&category=core`)
- `POST /api/memory` — store: `{"key": "...", "content": "...", "category": "core"}`
- `DELETE /api/memory/{key}` — delete entry
Categories: `core`, `daily`, `conversation`, or any custom string.
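As an illustration of the REST surface (a sketch; assumes the `requests` package and a valid bearer token):

```python
import requests

BASE = "http://127.0.0.1:42617"
HEADERS = {"Authorization": "Bearer <token>"}

# Store an entry (category defaults to "core" if omitted)
requests.post(f"{BASE}/api/memory", headers=HEADERS, timeout=30,
              json={"key": "favorite-color", "content": "blue", "category": "core"},
              ).raise_for_status()

# Search entries by text and category
resp = requests.get(f"{BASE}/api/memory", headers=HEADERS, timeout=30,
                    params={"query": "color", "category": "core"})
for entry in resp.json()["entries"]:
    print(entry["key"], "->", entry["content"])
```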
### Cron / Scheduling
**CLI:**
- `zeroclaw cron list` — show all jobs
- `zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York` — recurring
- `zeroclaw cron add-at '2026-03-11T10:00:00Z' 'Remind me'` — one-time at specific time
- `zeroclaw cron add-every 3600000 'Check health'` — interval in ms
- `zeroclaw cron once 30m 'Follow up'` — delay from now
- `zeroclaw cron pause <id>` / `zeroclaw cron resume <id>` / `zeroclaw cron remove <id>`
**REST:**
- `GET /api/cron` — list jobs
- `POST /api/cron` — add: `{"name": "...", "schedule": "0 9 * * *", "command": "..."}`
- `DELETE /api/cron/{id}` — remove job
### Tools
Tools are used automatically by the agent during conversations (shell, file ops, memory, browser, HTTP, web search, git, etc. — 30+ tools gated by security policy).
To see what's available: `GET /api/tools` (REST) lists all registered tools with descriptions and parameter schemas.
### Configuration
Edit `~/.zeroclaw/config.toml` directly, or re-run `zeroclaw onboard` to reconfigure.
**REST:**
- `GET /api/config` — get current config (secrets masked as `***MASKED***`)
- `PUT /api/config` — update config (send raw TOML as body, 1MB limit)
### Providers & Models
- `zeroclaw providers` — list all supported providers
- `zeroclaw models list` — cached model catalog
- `zeroclaw models refresh --all` — refresh from providers
- `zeroclaw models set anthropic/claude-sonnet-4-6` — set default model
Override per-message: `zeroclaw agent -p anthropic --model claude-sonnet-4-6 -m "hello"`
### Real-Time Events (SSE)
REST only — useful for building dashboards or monitoring:
```bash
curl -N -H "Authorization: Bearer <token>" http://127.0.0.1:42617/api/events
```
Streams JSON events: `llm_request`, `tool_call_start`, `tool_call`, `agent_start`, `agent_end`, `error`.
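A minimal Python consumer (a sketch; assumes `requests`, and that each SSE `data:` line carries one JSON event):

```python
import json
import requests

resp = requests.get(
    "http://127.0.0.1:42617/api/events",
    headers={"Authorization": "Bearer <token>"},
    stream=True,
)
for raw in resp.iter_lines():
    line = raw.decode("utf-8", "replace")
    if line.startswith("data:"):
        event = json.loads(line[len("data:"):].strip())
        print(event)  # filter here, e.g. only tool_call events
```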
### Cost Tracking
`GET /api/cost` — returns session/daily/monthly costs, token counts, per-model breakdown.
### Emergency Stop
Confirm with the user before running any estop command — these are disruptive.
- `zeroclaw estop --level kill-all` — stop everything
- `zeroclaw estop --level network-kill` — block all network
- `zeroclaw estop --level tool-freeze --tool shell` — freeze specific tool
- `zeroclaw estop status` — check current estop state
- `zeroclaw estop resume --network` — resume
### Gateway Lifecycle
- `zeroclaw gateway` — start HTTP gateway (foreground)
- `zeroclaw gateway -p 8080 --host 127.0.0.1` — custom bind
- `zeroclaw daemon` — start gateway + channels + scheduler + heartbeat
- `zeroclaw service install/start/stop/status/uninstall` — OS service management
### Channels
ZeroClaw supports 21 messaging channels. To add one, you need to edit `~/.zeroclaw/config.toml`. For example, to set up Telegram:
```toml
[channels]
telegram = true
[channels_config.telegram]
bot_token = "your-bot-token-from-botfather"
allowed_users = [123456789]
```
Then restart the daemon. Check channel health with `zeroclaw channels doctor`.
For the full list of channels and their config fields, read `references/cli-reference.md` (Channels section).
### Pairing (Authentication Setup)
When `require_pairing = true` (default), REST clients need a bearer token:
```bash
curl -X POST http://127.0.0.1:42617/pair -H "X-Pairing-Code: <code>"
```
Response includes `{"token": "..."}` — save this for subsequent requests.
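The same exchange in Python (a sketch; assumes `requests`):

```python
import requests

BASE = "http://127.0.0.1:42617"

# Exchange the one-time pairing code for a bearer token
resp = requests.post(f"{BASE}/pair", headers={"X-Pairing-Code": "<code>"}, timeout=30)
resp.raise_for_status()
token = resp.json()["token"]

# Use the token on every subsequent authenticated call
status = requests.get(f"{BASE}/api/status",
                      headers={"Authorization": f"Bearer {token}"}, timeout=30)
print(status.json())
```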
## Common Workflows
Here are multi-step sequences you're likely to need:
**"Is my agent healthy?"**
1. Run `zeroclaw status` — check provider, model, channels
2. Run `zeroclaw doctor` — check connectivity, diagnose issues
3. If gateway needed: `curl -sf http://127.0.0.1:42617/health`
**"Set up a new channel"**
1. Read the current config: `cat ~/.zeroclaw/config.toml`
2. Add the channel config (edit the TOML)
3. Restart: `zeroclaw service restart` (or restart daemon manually)
4. Verify: `zeroclaw channels doctor`
**"Switch to a different model"**
1. Check available: `zeroclaw models list`
2. Set it: `zeroclaw models set <provider/model>`
3. Verify: `zeroclaw status`
4. Test: `zeroclaw agent -m "hello, what model are you?"`
## Gateway Defaults
- **Port:** 42617
- **Host:** 127.0.0.1
- **Auth:** Pairing required (bearer token)
- **Rate limits:** 60 webhook requests/min, 10 pairing attempts/min
- **Body limit:** 64KB (1MB for config updates)
- **Timeout:** 30 seconds
- **Idempotency:** Optional `X-Idempotency-Key` header on `/webhook` (300s TTL)
- **Config location:** `~/.zeroclaw/config.toml`
## Reference Files
For the complete API specification with every endpoint, field, and edge case, read `references/rest-api.md`.
For the full CLI command tree with all flags and options, read `references/cli-reference.md`.
Only load these when you need precise details beyond what's in this file — for most operations, the quick references above are sufficient.
## Troubleshooting
**"zeroclaw: command not found"** — Binary not in PATH. Check `./target/release/zeroclaw`, `~/.cargo/bin/zeroclaw`, or build from source with `cargo build --release`.
**"Connection refused" on REST calls** — Gateway isn't running. Start it with `zeroclaw gateway` or `zeroclaw daemon`.
**"Unauthorized" (401/403)** — Bearer token is missing or invalid. Re-pair via `POST /pair` with the pairing code, or check `~/.zeroclaw/config.toml` for the stored token.
**"LLM request failed" (500)** — Provider issue. Run `zeroclaw doctor` to check connectivity. Common causes: expired API key, provider outage, rate limiting on the provider side.
**"Too many requests" (429)** — You're hitting ZeroClaw's rate limit. Back off — the response includes `retry_after` with the number of seconds to wait.
**Agent not using tools / acting limited** — Check autonomy settings in config.toml under `[autonomy]`. `level = "read_only"` disables most tools. Try `level = "supervised"` or `level = "full"`.
**Memory not persisting** — Check `[memory]` config. If `backend = "none"`, nothing is stored. Switch to `"sqlite"` or `"markdown"`. Also verify `auto_save = true`.
**Channel not responding** — Run `zeroclaw channels doctor` for the specific channel. Common issues: expired bot token, wrong allowed_users list, channel not enabled in `[channels]`.
Report errors to the user with context appropriate to their expertise level. For beginners, explain what went wrong and suggest the fix. For experts, just show the error and the fix.
@@ -0,0 +1,23 @@
{
"skill_name": "zeroclaw",
"evals": [
{
"id": 0,
"prompt": "how do i make my bot remember my name",
"expected_output": "Executes a zeroclaw command to store a memory, explains what happened in beginner-friendly language",
"files": []
},
{
"id": 1,
"prompt": "I want to schedule a daily health check on my ZeroClaw instance every morning at 9am ET",
"expected_output": "Executes zeroclaw cron add with correct cron expression and timezone flag",
"files": []
},
{
"id": 2,
"prompt": "Set up a Python script that monitors my ZeroClaw agent's activity via SSE and logs tool calls to a file",
"expected_output": "Writes a Python script that connects to /api/events SSE endpoint with auth, filters for tool_call events, and logs to a file",
"files": []
}
]
}
@@ -0,0 +1,277 @@
# ZeroClaw CLI Reference
Complete command reference for the `zeroclaw` binary.
## Table of Contents
1. [Agent](#agent)
2. [Onboarding](#onboarding)
3. [Status & Diagnostics](#status--diagnostics)
4. [Memory](#memory)
5. [Cron](#cron)
6. [Providers & Models](#providers--models)
7. [Gateway & Daemon](#gateway--daemon)
8. [Service Management](#service-management)
9. [Channels](#channels)
10. [Security & Emergency Stop](#security--emergency-stop)
11. [Hardware Peripherals](#hardware-peripherals)
12. [Skills](#skills)
13. [Shell Completions](#shell-completions)
---
## Agent
Interactive chat or single-message mode.
```bash
zeroclaw agent # Interactive REPL
zeroclaw agent -m "Summarize today's logs" # Single message
zeroclaw agent -p anthropic --model claude-sonnet-4-6 # Override provider/model
zeroclaw agent -t 0.3 # Set temperature
zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0 # Attach hardware
```
**Key flags:**
- `-m <message>` — single message mode (no REPL)
- `-p <provider>` — override provider (openrouter, anthropic, openai, ollama)
- `--model <model>` — override model
- `-t <float>` — temperature (0.0–2.0)
- `--peripheral <name>:<port>` — attach hardware peripheral
The agent has access to 30+ tools gated by security policy: shell, file_read, file_write, file_edit, glob_search, content_search, memory_store, memory_recall, memory_forget, browser, http_request, web_fetch, web_search, cron, delegate, git, and more. Max tool iterations defaults to 10.
---
## Onboarding
First-time setup or reconfiguration.
```bash
zeroclaw onboard # Quick mode (default: openrouter)
zeroclaw onboard --provider anthropic # Quick mode with specific provider
zeroclaw onboard --interactive # Interactive wizard
zeroclaw onboard --memory sqlite # Set memory backend
zeroclaw onboard --force # Overwrite existing config
zeroclaw onboard --channels-only # Repair channels only
```
**Key flags:**
- `--provider <name>` — openrouter (default), anthropic, openai, ollama
- `--model <model>` — default model
- `--memory <backend>` — sqlite, markdown, lucid, none
- `--force` — overwrite existing config.toml
- `--channels-only` — only repair channel configuration
- `--interactive` — step-by-step wizard
Creates `~/.zeroclaw/config.toml` with `0600` permissions.
---
## Status & Diagnostics
```bash
zeroclaw status # System overview
zeroclaw doctor # Run all diagnostic checks
zeroclaw doctor models # Probe model connectivity
zeroclaw doctor traces # Query execution traces
```
---
## Memory
```bash
zeroclaw memory list # List all entries
zeroclaw memory list --category core --limit 10 # Filtered list
zeroclaw memory get "some-key" # Get specific entry
zeroclaw memory stats # Usage statistics
zeroclaw memory clear --key "prefix" --yes # Delete entries (requires --yes)
```
**Key flags:**
- `--category <name>` — filter by category (core, daily, conversation, custom)
- `--limit <n>` — limit results
- `--key <prefix>` — key prefix for clear operations
- `--yes` — skip confirmation (required for clear)
---
## Cron
```bash
zeroclaw cron list # List all jobs
zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York # Recurring (cron expr)
zeroclaw cron add-at '2026-03-11T10:00:00Z' 'Remind me about meeting' # One-time at specific time
zeroclaw cron add-every 3600000 'Check server health' # Interval in milliseconds
zeroclaw cron once 30m 'Follow up on that task' # Delay from now
zeroclaw cron pause <id> # Pause job
zeroclaw cron resume <id> # Resume job
zeroclaw cron remove <id> # Delete job
```
**Subcommands:**
- `add <cron-expr> <command>` — standard cron expression (5-field)
- `add-at <iso-datetime> <command>` — fire once at exact time
- `add-every <ms> <command>` — repeating interval
- `once <duration> <command>` — delay from now (e.g., `30m`, `2h`, `1d`)
---
## Providers & Models
```bash
zeroclaw providers # List all 40+ supported providers
zeroclaw models list # Show cached model catalog
zeroclaw models refresh --all # Refresh catalogs from all providers
zeroclaw models set anthropic/claude-sonnet-4-6 # Set default model
zeroclaw models status # Current model info
```
Model routing in config.toml:
```toml
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
model = "anthropic/claude-sonnet-4-6"
```
---
## Gateway & Daemon
```bash
zeroclaw gateway # Start HTTP gateway (foreground)
zeroclaw gateway -p 8080 --host 127.0.0.1 # Custom port/host
zeroclaw daemon # Gateway + channels + scheduler + heartbeat
zeroclaw daemon -p 8080 --host 0.0.0.0 # Custom bind
```
**Gateway defaults:**
- Port: 42617
- Host: 127.0.0.1
- Pairing required: true
- Public bind allowed: false
---
## Service Management
OS service lifecycle (systemd on Linux, launchd on macOS).
```bash
zeroclaw service install # Install as system service
zeroclaw service start # Start the service
zeroclaw service status # Check service status
zeroclaw service stop # Stop the service
zeroclaw service restart # Restart the service
zeroclaw service uninstall # Remove the service
```
**Logs:**
- macOS: `~/.zeroclaw/logs/daemon.stdout.log`
- Linux: `journalctl -u zeroclaw`
---
## Channels
Channels are configured in `config.toml` under `[channels]` and `[channels_config.*]`.
```bash
zeroclaw channels list # List configured channels
zeroclaw channels doctor # Check channel health
```
Supported channels (21 total): Telegram, Discord, Slack, WhatsApp (Meta), WATI, Linq (iMessage/RCS/SMS), Email (IMAP/SMTP), IRC, Matrix, Nostr, Signal, Nextcloud Talk, and more.
Channel config example (Telegram):
```toml
[channels]
telegram = true
[channels_config.telegram]
bot_token = "..."
allowed_users = [123456789]
```
---
## Security & Emergency Stop
```bash
zeroclaw estop --level kill-all # Stop everything
zeroclaw estop --level network-kill # Block all network access
zeroclaw estop --level domain-block --domain "*.example.com" # Block specific domains
zeroclaw estop --level tool-freeze --tool shell # Freeze specific tool
zeroclaw estop status # Check estop state
zeroclaw estop resume --network # Resume (may require OTP)
```
**Estop levels:**
- `kill-all` — nuclear option, stops all agent activity
- `network-kill` — blocks all outbound network
- `domain-block` — blocks specific domain patterns
- `tool-freeze` — freezes individual tools
Autonomy config in config.toml:
```toml
[autonomy]
level = "supervised" # read_only | supervised | full
workspace_only = true
allowed_commands = ["git", "cargo", "python"]
forbidden_paths = ["/etc", "/root", "~/.ssh"]
max_actions_per_hour = 20
max_cost_per_day_cents = 500
```
---
## Hardware Peripherals
```bash
zeroclaw hardware discover # Find USB devices
zeroclaw hardware introspect /dev/ttyACM0 # Probe device capabilities
zeroclaw peripheral list # List configured peripherals
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0 # Add peripheral
zeroclaw peripheral flash-nucleo # Flash STM32 firmware
zeroclaw peripheral flash --port /dev/cu.usbmodem101 # Flash Arduino firmware
```
**Supported boards:** STM32 Nucleo-F401RE, Arduino Uno R4, Raspberry Pi GPIO, ESP32.
Attach to agent session: `zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0`
---
## Skills
```bash
zeroclaw skills list # List installed skills
zeroclaw skills install <path-or-url> # Install a skill
zeroclaw skills audit # Audit installed skills
zeroclaw skills remove <name> # Remove a skill
```
---
## Shell Completions
```bash
zeroclaw completions zsh # Generate Zsh completions
zeroclaw completions bash # Generate Bash completions
zeroclaw completions fish # Generate Fish completions
```
---
## Config File
Default location: `~/.zeroclaw/config.toml`
Config resolution order (first match wins):
1. `ZEROCLAW_CONFIG_DIR` environment variable
2. `ZEROCLAW_WORKSPACE` environment variable
3. `~/.zeroclaw/active_workspace.toml` marker file
4. `~/.zeroclaw/config.toml` (default)
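That order sketched in Python (illustrative only; the real implementation is in Rust, and how the marker file encodes the workspace path is an assumption here):

```python
import os
from pathlib import Path

def resolve_config_dir() -> Path:
    """First match wins, mirroring the documented resolution order."""
    if d := os.environ.get("ZEROCLAW_CONFIG_DIR"):
        return Path(d)
    if d := os.environ.get("ZEROCLAW_WORKSPACE"):
        return Path(d)
    marker = Path.home() / ".zeroclaw" / "active_workspace.toml"
    if marker.exists():
        # Assumption: the marker's TOML names the active workspace directory;
        # parsing it is omitted from this sketch.
        pass
    return Path.home() / ".zeroclaw"

print(resolve_config_dir() / "config.toml")
```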
@@ -0,0 +1,505 @@
# ZeroClaw REST API Reference
Complete endpoint reference for the ZeroClaw gateway HTTP API.
## Table of Contents
1. [Authentication](#authentication)
2. [Public Endpoints](#public-endpoints)
3. [Webhook](#webhook)
4. [WebSocket Chat](#websocket-chat)
5. [Status & Health](#status--health)
6. [Memory](#memory)
7. [Cron](#cron)
8. [Tools](#tools)
9. [Configuration](#configuration)
10. [Integrations](#integrations)
11. [Cost](#cost)
12. [Events (SSE)](#events-sse)
13. [Channel Webhooks](#channel-webhooks)
14. [Rate Limiting](#rate-limiting)
15. [Error Responses](#error-responses)
---
## Authentication
Three authentication mechanisms:
### Bearer Token (Primary)
```
Authorization: Bearer <token>
```
Obtained via `POST /pair`. Required for all `/api/*` endpoints when `require_pairing = true` (default).
### Webhook Secret
```
X-Webhook-Secret: <raw_secret>
```
Optional additional auth for `/webhook`. The server hashes the supplied secret with SHA-256 and compares it against the stored hash in constant time.
### WebSocket Token
```
ws://host:port/ws/chat?token=<bearer_token>
```
WebSocket connections pass the token as a query parameter (browsers can't set custom headers on WS handshake).
---
## Public Endpoints
### GET /health
No authentication required.
**Response 200:**
```json
{
"status": "ok",
"paired": true,
"require_pairing": true,
"runtime": {}
}
```
### GET /metrics
Prometheus text exposition format.
**Response 200:**
```
Content-Type: text/plain; version=0.0.4; charset=utf-8
```
### POST /pair
Exchange a one-time pairing code for a bearer token.
**Rate Limit:** Configurable per-minute limit per IP (default: 10/min).
**Headers:**
- `X-Pairing-Code: <code>` (required)
**Response 200 (success):**
```json
{
"paired": true,
"persisted": true,
"token": "<bearer_token>",
"message": "Save this token — use it as Authorization: Bearer <token>"
}
```
**Response 200 (persistence failure):**
```json
{
"paired": true,
"persisted": false,
"token": "<bearer_token>",
"message": "Paired for this process, but failed to persist token to config.toml..."
}
```
**Response 403:**
```json
{"error": "Invalid pairing code"}
```
**Response 429:**
```json
{"error": "Too many pairing requests. Please retry later.", "retry_after": 60}
```
**Response 429 (lockout):**
```json
{"error": "Too many failed attempts. Try again in {lockout_secs}s.", "retry_after": 120}
```
---
## Webhook
### POST /webhook
Send a message to the agent and receive a response.
**Rate Limit:** Configurable per-minute limit per IP (default: 60/min).
**Headers:**
- `Authorization: Bearer <token>` (if pairing enabled)
- `Content-Type: application/json`
- `X-Webhook-Secret: <secret>` (optional)
- `X-Idempotency-Key: <uuid>` (optional)
**Request Body:**
```json
{"message": "your prompt here"}
```
**Response 200:**
```json
{"response": "<llm_response>", "model": "<model_name>"}
```
**Response 200 (duplicate — idempotency key match):**
```json
{"status": "duplicate", "idempotent": true, "message": "Request already processed for this idempotency key"}
```
**Response 401:**
```json
{"error": "Unauthorized — pair first via POST /pair, then send Authorization: Bearer <token>"}
```
**Response 429:**
```json
{"error": "Too many webhook requests. Please retry later.", "retry_after": 60}
```
**Response 500:**
```json
{"error": "LLM request failed"}
```
### Idempotency
- Header: `X-Idempotency-Key: <uuid>`
- TTL: configurable, default 300 seconds
- Max tracked keys: configurable, default 10,000
- Duplicate requests within TTL return `"status": "duplicate"` instead of re-processing
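A sketch of safe retries with an idempotency key (assumes `requests`; the key is a client-generated UUID):

```python
import uuid
import requests

headers = {
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json",
    "X-Idempotency-Key": str(uuid.uuid4()),
}
body = {"message": "deploy finished, summarize the logs"}
url = "http://127.0.0.1:42617/webhook"

first = requests.post(url, headers=headers, json=body, timeout=60)
print(first.json())   # {"response": "...", "model": "..."}

# Re-sending with the same key within the TTL does not re-run the agent
retry = requests.post(url, headers=headers, json=body, timeout=60)
print(retry.json())   # {"status": "duplicate", "idempotent": true, ...}
```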
---
## WebSocket Chat
### GET /ws/chat?token=<bearer_token>
Streaming agent chat over WebSocket.
**Client → Server:**
```json
{"type": "message", "content": "Hello, what's the weather?"}
```
**Server → Client (complete response):**
```json
{"type": "done", "full_response": "The weather in San Francisco is sunny..."}
```
**Server → Client (error):**
```json
{"type": "error", "message": "Error message here"}
```
Unknown message types are ignored; invalid JSON triggers an error response.
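A minimal asyncio client (a sketch; assumes the third-party `websockets` package):

```python
import asyncio
import json
import websockets

async def chat(prompt: str) -> str:
    uri = "ws://127.0.0.1:42617/ws/chat?token=<bearer_token>"
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({"type": "message", "content": prompt}))
        while True:
            frame = json.loads(await ws.recv())
            if frame["type"] == "done":
                return frame["full_response"]
            if frame["type"] == "error":
                raise RuntimeError(frame["message"])
            # other frame types are ignored, mirroring the server

print(asyncio.run(chat("Hello, what's the weather?")))
```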
---
## Status & Health
### GET /api/status
**Response 200:**
```json
{
"provider": "openrouter",
"model": "anthropic/claude-sonnet-4",
"temperature": 0.7,
"uptime_seconds": 3600,
"gateway_port": 42617,
"locale": "en",
"memory_backend": "sqlite",
"paired": true,
"channels": {
"telegram": false,
"discord": true,
"slack": false
},
"health": {}
}
```
### GET /api/health
Component health snapshot (requires auth).
```json
{"health": {}}
```
### GET or POST /api/doctor
Run system diagnostics.
```json
{
"results": [
{"name": "provider_connectivity", "severity": "ok", "message": "OpenRouter API reachable"}
],
"summary": {"ok": 5, "warnings": 1, "errors": 0}
}
```
---
## Memory
### GET /api/memory
List or search memory entries.
**Query Parameters:**
- `query` (string, optional) — search text; triggers search mode
- `category` (string, optional) — filter by category
**Response 200:**
```json
{
"entries": [
{
"key": "memory_key",
"content": "memory content",
"category": "core",
"timestamp": "2025-01-10T12:00:00Z"
}
]
}
```
### POST /api/memory
Store a memory entry.
**Request Body:**
```json
{
"key": "unique_key",
"content": "memory content",
"category": "core"
}
```
Category defaults to `"core"` if omitted. Other values: `daily`, `conversation`, or any custom string.
**Response 200:**
```json
{"status": "ok"}
```
### DELETE /api/memory/{key}
Delete a memory entry.
**Response 200:**
```json
{"status": "ok", "deleted": true}
```
---
## Cron
### GET /api/cron
List all scheduled jobs.
**Response 200:**
```json
{
"jobs": [
{
"id": "<uuid>",
"name": "daily-backup",
"command": "backup.sh",
"next_run": "2025-01-10T15:00:00Z",
"last_run": "2025-01-09T15:00:00Z",
"last_status": "success",
"enabled": true
}
]
}
```
### POST /api/cron
Add a new job.
**Request Body:**
```json
{
"name": "job-name",
"schedule": "0 9 * * *",
"command": "command to run"
}
```
**Response 200:**
```json
{
"status": "ok",
"job": {"id": "<uuid>", "name": "job-name", "command": "command to run", "enabled": true}
}
```
### DELETE /api/cron/{id}
Remove a job.
**Response 200:**
```json
{"status": "ok"}
```
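The cron endpoints together in Python (a sketch; assumes `requests`):

```python
import requests

BASE = "http://127.0.0.1:42617"
HEADERS = {"Authorization": "Bearer <token>"}

# Add a weekday 9am job
job = requests.post(f"{BASE}/api/cron", headers=HEADERS, timeout=30, json={
    "name": "morning-check",
    "schedule": "0 9 * * 1-5",
    "command": "Run a health check and summarize the result",
}).json()["job"]

# List jobs, then remove the one just added
print(requests.get(f"{BASE}/api/cron", headers=HEADERS, timeout=30).json()["jobs"])
requests.delete(f"{BASE}/api/cron/{job['id']}", headers=HEADERS, timeout=30)
```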
---
## Tools
### GET /api/tools
List all registered tools with descriptions and parameter schemas.
**Response 200:**
```json
{
"tools": [
{"name": "shell", "description": "Execute shell commands", "parameters": {}},
{"name": "file_read", "description": "Read file contents", "parameters": {}}
]
}
```
---
## Configuration
### GET /api/config
Get current config. Secrets are masked as `***MASKED***`.
**Response 200:**
```json
{"format": "toml", "content": "<toml_string>"}
```
### PUT /api/config
Update config from TOML body. Body limit: 1 MB.
**Request Body:** Raw TOML text.
**Response 200:**
```json
{"status": "ok"}
```
**Response 400:**
```json
{"error": "Invalid TOML: <details>"}
```
or
```json
{"error": "Invalid config: <validation_error>"}
```
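A cautious update flow in Python (a sketch; assumes `requests`). Because `GET /api/config` masks secrets, writing the fetched content straight back would replace real secrets with `***MASKED***`; always PUT a complete, hand-edited TOML document:

```python
import requests

BASE = "http://127.0.0.1:42617"
HEADERS = {"Authorization": "Bearer <token>"}

# Inspect the current config (secrets are masked)
current = requests.get(f"{BASE}/api/config", headers=HEADERS, timeout=30).json()
print(current["content"])

# Upload an edited TOML document as the new config
new_toml = open("config.edited.toml").read()
resp = requests.put(f"{BASE}/api/config", headers=HEADERS,
                    data=new_toml.encode("utf-8"), timeout=30)
resp.raise_for_status()  # a 400 means invalid TOML or failed validation
```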
---
## Integrations
### GET /api/integrations
List all integrations and their status.
**Response 200:**
```json
{
"integrations": [
{"name": "openrouter", "description": "OpenRouter LLM provider", "category": "providers", "status": "ok"},
{"name": "telegram", "description": "Telegram messaging channel", "category": "channels", "status": "configured"}
]
}
```
---
## Cost
### GET /api/cost
Cost tracking summary.
**Response 200:**
```json
{
"cost": {
"session_cost_usd": 1.50,
"daily_cost_usd": 5.00,
"monthly_cost_usd": 150.00,
"total_tokens": 50000,
"request_count": 25,
"by_model": {"anthropic/claude-sonnet-4": 1.50}
}
}
```
---
## Events (SSE)
### GET /api/events
Server-Sent Events stream. Requires bearer token.
**Content-Type:** `text/event-stream`
**Event types:**
| Type | Fields | Description |
|------|--------|-------------|
| `llm_request` | provider, model, timestamp | LLM call started |
| `tool_call_start` | tool, timestamp | Tool execution started |
| `tool_call` | tool, duration_ms, success, timestamp | Tool execution completed |
| `agent_start` | provider, model, timestamp | Agent loop started |
| `agent_end` | provider, model, duration_ms, tokens_used, cost_usd, timestamp | Agent loop completed |
| `error` | component, message, timestamp | Error occurred |
**Example:**
```bash
curl -N -H "Authorization: Bearer <token>" http://127.0.0.1:42617/api/events
```
---
## Channel Webhooks
These are incoming webhook endpoints for specific messaging channels. They're set up automatically when channels are configured.
### WhatsApp (Meta Cloud API)
- `GET /whatsapp` — verification (echoes `hub.challenge`)
- `POST /whatsapp` — incoming messages (signature verified via `X-Hub-Signature-256`)
### WATI (WhatsApp Business)
- `GET /wati` — verification (echoes `challenge`)
- `POST /wati` — incoming messages
### Linq (iMessage/RCS/SMS)
- `POST /linq` — incoming messages (signature verified via `X-Webhook-Signature` + `X-Webhook-Timestamp`)
### Nextcloud Talk
- `POST /nextcloud-talk` — bot API webhook (signature verified via `X-Nextcloud-Talk-Signature`)
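These checks follow the standard HMAC pattern. An illustration only (the exact header format differs per channel; the `sha256=` prefix shown is Meta's convention for `X-Hub-Signature-256`):

```python
import hashlib
import hmac

def verify_sha256_signature(secret: str, body: bytes, header_value: str) -> bool:
    """Validate a Meta-style X-Hub-Signature-256 header against the raw body."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header_value)
```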
---
## Rate Limiting
Rate limits use a sliding 60-second window, tracked per client IP.
| Endpoint | Default Limit |
|----------|--------------|
| `POST /pair` | 10/min |
| `POST /webhook` | 60/min |
If `trust_forwarded_headers` is enabled, the client IP is taken from `X-Forwarded-For`.
Max tracked keys: configurable (default: 10,000).
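Clients should honor `retry_after` on 429 responses. A sketch of that loop (assumes `requests`):

```python
import time
import requests

def post_with_backoff(url: str, max_attempts: int = 5, **kwargs) -> requests.Response:
    """Retry a POST when rate-limited, sleeping for the server-advised interval."""
    for attempt in range(max_attempts):
        resp = requests.post(url, **kwargs)
        if resp.status_code != 429:
            return resp
        # 429 bodies include retry_after (seconds); fall back to exponential backoff
        time.sleep(resp.json().get("retry_after", 2 ** attempt))
    raise RuntimeError("still rate-limited after retries")
```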
---
## Error Responses
**Standard format:**
```json
{"error": "Human-readable error message"}
```
**With retry info:**
```json
{"error": "...", "retry_after": 60}
```
**Status codes:**
| Code | Meaning |
|------|---------|
| 200 | Success |
| 400 | Invalid JSON, missing fields, invalid TOML |
| 401 | Invalid/missing bearer token or webhook secret |
| 403 | Pairing verification failed |
| 404 | Endpoint or channel not configured |
| 408 | Request timeout (30s) |
| 429 | Rate limited (check `retry_after`) |
| 500 | LLM error, database error, internal failure |
@@ -20,16 +20,12 @@ reviews:
enabled: true
# Only review PRs targeting these branches
base_branches:
- main
- develop
- master
# Skip reviews for draft PRs or WIP
drafts: false
# Enable base branch analysis
base_branch_analysis: true
# Poem configuration
poem:
enabled: false
# Poem feature toggle (must be a boolean, not an object)
poem: false
# Reviewer suggestions
reviewer:
@@ -1,25 +1,3 @@
# EditorConfig — https://editorconfig.org
# Provides consistent formatting defaults across editors and platforms.
root = true
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
indent_style = space
indent_size = 4
[*.md]
# Trailing whitespace is significant in Markdown (line breaks).
trim_trailing_whitespace = false
[*.{yml,yaml}]
indent_size = 2
[*.toml]
indent_size = 2
[Dockerfile]
indent_size = 4
@@ -59,6 +59,7 @@ PROVIDER=openrouter
# ZAI_API_KEY=...
# SYNTHETIC_API_KEY=...
# OPENCODE_API_KEY=...
# OPENCODE_GO_API_KEY=...
# VERCEL_API_KEY=...
# CLOUDFLARE_API_KEY=...
@@ -1,33 +1 @@
# Normalize all text files
* text=auto
# Force LF for scripts and build-critical files
*.sh text eol=lf
Dockerfile* text eol=lf
*.rs text eol=lf
*.toml text eol=lf
*.yml text eol=lf
*.yaml text eol=lf
# CI
.github/**/* text eol=lf
# Images
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary
# Archives
*.zip binary
*.tar binary
*.tgz binary
*.gz binary
*.7z binary
# Compiled artifacts
*.so binary
*.dll binary
*.exe binary
*.a binary
@@ -1,32 +1,32 @@
# Default owner for all files
* @chumyin
* @theonlyhennygod @JordanTheJet @SimianAstronaut7
# Important functional modules
/src/agent/** @theonlyhennygod
/src/providers/** @theonlyhennygod
/src/channels/** @theonlyhennygod
/src/tools/** @theonlyhennygod
/src/gateway/** @theonlyhennygod
/src/runtime/** @theonlyhennygod
/src/memory/** @theonlyhennygod
/Cargo.toml @theonlyhennygod
/Cargo.lock @theonlyhennygod
/src/agent/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/providers/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/channels/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/tools/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/gateway/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/runtime/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/memory/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/Cargo.toml @theonlyhennygod @JordanTheJet @SimianAstronaut7
/Cargo.lock @theonlyhennygod @JordanTheJet @SimianAstronaut7
# Security / tests / CI-CD ownership
/src/security/** @chumyin
/tests/** @chumyin
/.github/** @chumyin
/.github/workflows/** @chumyin
/.github/codeql/** @chumyin
/.github/dependabot.yml @chumyin
/SECURITY.md @chumyin
/docs/actions-source-policy.md @chumyin
/docs/ci-map.md @chumyin
/src/security/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/tests/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/workflows/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/codeql/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/dependabot.yml @theonlyhennygod @JordanTheJet @SimianAstronaut7
/SECURITY.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/actions-source-policy.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/ci-map.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
# Docs & governance
/docs/** @chumyin
/AGENTS.md @chumyin
/CLAUDE.md @chumyin
/CONTRIBUTING.md @chumyin
/docs/pr-workflow.md @chumyin
/docs/reviewer-playbook.md @chumyin
/docs/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/AGENTS.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/CLAUDE.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/CONTRIBUTING.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/pr-workflow.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/reviewer-playbook.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
@@ -11,15 +11,6 @@ body:
Please provide a minimal reproducible case so maintainers can triage quickly.
Do not include personal/sensitive data; redact and anonymize all logs/payloads.
- type: input
id: summary
attributes:
label: Summary
description: One-line description of the problem.
placeholder: zeroclaw daemon exits immediately when ...
validations:
required: true
- type: dropdown
id: component
attributes:
@@ -83,13 +74,13 @@ body:
id: impact
attributes:
label: Impact
description: Who is affected, how often, and practical consequences.
description: Who is affected, how often, and practical consequences (optional but helps triage).
placeholder: |
Affected users: ...
Frequency: always/intermittent
Consequence: ...
validations:
required: true
required: false
- type: textarea
id: logs
@@ -112,9 +103,10 @@ body:
id: rust
attributes:
label: Rust version
description: Required for runtime/build bugs; optional for docs/config issues.
placeholder: rustc 1.xx.x
validations:
required: true
required: false
- type: input
id: os
@@ -140,9 +132,7 @@ body:
attributes:
label: Pre-flight checks
options:
- label: I reproduced this on the latest main branch or latest release.
- label: I reproduced this on the latest master branch or latest release.
required: true
- label: I redacted secrets/tokens from logs.
required: true
- label: I removed personal identifiers and replaced identity-specific data with neutral placeholders.
- label: I redacted secrets, tokens, and personal data from all submitted content.
required: true
@@ -4,8 +4,8 @@ contact_links:
url: https://github.com/zeroclaw-labs/zeroclaw/security/policy
about: Please report security vulnerabilities privately via SECURITY.md policy.
- name: Contribution guide
url: https://github.com/zeroclaw-labs/zeroclaw/blob/main/CONTRIBUTING.md
url: https://github.com/zeroclaw-labs/zeroclaw/blob/master/CONTRIBUTING.md
about: Please read contribution and PR requirements before opening an issue.
- name: PR workflow & reviewer expectations
url: https://github.com/zeroclaw-labs/zeroclaw/blob/main/docs/pr-workflow.md
url: https://github.com/zeroclaw-labs/zeroclaw/blob/master/docs/pr-workflow.md
about: Read risk-based PR tracks, CI gates, and merge criteria before filing feature requests.
@@ -42,10 +42,10 @@ body:
id: non_goals
attributes:
label: Non-goals / out of scope
description: Clarify what should not be included in the first iteration.
description: Clarify what should not be included in the first iteration (optional but helps scope discussion).
placeholder: No UI changes, no cross-provider dynamic adaptation in v1.
validations:
required: true
required: false
- type: textarea
id: alternatives
@@ -60,31 +60,31 @@ body:
id: acceptance
attributes:
label: Acceptance criteria
description: What outcomes would make this request complete?
description: What outcomes would make this request complete? (optional — can be defined during triage)
placeholder: |
- Config key is documented and validated
- Runtime path uses configured retry budget
- Regression tests cover fallback and invalid config
validations:
required: true
required: false
- type: textarea
id: architecture
attributes:
label: Architecture impact
description: Which subsystem(s) are affected?
description: Which subsystem(s) are affected? (optional — maintainers will assess during triage)
placeholder: providers/, channels/, memory/, runtime/, security/, docs/ ...
validations:
required: true
required: false
- type: textarea
id: risk
attributes:
label: Risk and rollback
description: Main risk + how to disable/revert quickly.
description: Main risk + how to disable/revert quickly (optional — can be defined during planning).
placeholder: Risk is ... rollback is ...
validations:
required: true
required: false
- type: dropdown
id: breaking
+3 -3
View File
@@ -5,7 +5,7 @@ updates:
directory: "/"
schedule:
interval: daily
target-branch: dev
target-branch: master
open-pull-requests-limit: 3
labels:
- "dependencies"
@@ -21,7 +21,7 @@ updates:
directory: "/"
schedule:
interval: daily
target-branch: dev
target-branch: master
open-pull-requests-limit: 1
labels:
- "ci"
@@ -38,7 +38,7 @@ updates:
directory: "/"
schedule:
interval: daily
target-branch: dev
target-branch: master
open-pull-requests-limit: 1
labels:
- "ci"
@@ -2,7 +2,7 @@
Describe this PR in 2-5 bullets:
- Base branch target (`dev` for normal contributions; `main` only for `dev` promotion):
- Base branch target (`master` for all contributions):
- Problem:
- Why it matters:
- What changed:
@@ -10,21 +10,8 @@ Subdirectories are not valid locations for workflow entry files.
Repository convention:
1. Keep runnable workflow entry files at `.github/workflows/` root.
2. Keep workflow-only helper scripts under `.github/workflows/scripts/`.
3. Keep cross-tooling/local CI scripts under `scripts/ci/` when they are used outside Actions.
2. Keep cross-tooling/local CI scripts under `dev/` or `scripts/ci/` when used outside Actions.
Workflow behavior documentation in this directory:
- `.github/workflows/main-branch-flow.md`
Current workflow helper scripts:
- `.github/workflows/scripts/ci_workflow_owner_approval.js`
- `.github/workflows/scripts/ci_license_file_owner_guard.js`
- `.github/workflows/scripts/lint_feedback.js`
- `.github/workflows/scripts/pr_auto_response_contributor_tier.js`
- `.github/workflows/scripts/pr_auto_response_labeled_routes.js`
- `.github/workflows/scripts/pr_check_status_nudge.js`
- `.github/workflows/scripts/pr_intake_checks.js`
- `.github/workflows/scripts/pr_labeler.js`
- `.github/workflows/scripts/test_benchmarks_pr_comment.js`
- `.github/workflows/master-branch-flow.md`
@@ -0,0 +1,175 @@
name: Quality Gate
on:
pull_request:
branches: [master]
concurrency:
group: checks-${{ github.event.pull_request.number }}
cancel-in-progress: true
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: rustfmt, clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Check formatting
run: cargo fmt --all -- --check
- name: Clippy
run: cargo clippy --all-targets -- -D warnings
test:
name: Test
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Install mold linker
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Install cargo-nextest
run: curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
- name: Run tests
run: cargo nextest run --locked
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
build:
name: Build ${{ matrix.target }}
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
- os: macos-14
target: aarch64-apple-darwin
- os: windows-latest
target: x86_64-pc-windows-msvc
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
- name: Install mold linker
if: runner.os == 'Linux'
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Ensure web/dist placeholder exists
shell: bash
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Build release
shell: bash
run: cargo build --profile ci --locked --target ${{ matrix.target }}
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
security:
name: Security Audit
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Install cargo-audit
run: cargo install cargo-audit --locked
- name: Install cargo-deny
run: cargo install cargo-deny --locked
- name: Audit dependencies
run: cargo audit
- name: Check licenses and sources
run: cargo deny check licenses sources
check-32bit:
name: "Check (32-bit)"
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: i686-unknown-linux-gnu
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Install 32-bit libs
run: sudo apt-get update && sudo apt-get install -y gcc-multilib
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Cargo check (32-bit, no default features)
run: cargo check --target i686-unknown-linux-gnu --no-default-features
# Composite status check — branch protection only needs to require this
# single job instead of tracking every matrix leg individually.
gate:
name: CI Required Gate
if: always()
needs: [lint, test, build, security, check-32bit]
runs-on: ubuntu-latest
steps:
- name: Check upstream job results
run: |
if [[ "${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}" == "true" ]]; then
echo "::error::One or more upstream jobs failed or were cancelled"
exit 1
fi
security-gate:
name: Security Required Gate
if: always()
needs: [security]
runs-on: ubuntu-latest
steps:
- name: Check security job result
run: |
if [[ "${{ needs.security.result }}" != "success" ]]; then
echo "::error::Security audit failed or was cancelled"
exit 1
fi
@@ -1,61 +0,0 @@
name: CI Build (Fast)
# Optional fast release build that runs alongside the normal Build (Smoke) job.
# This workflow is informational and does not gate merges.
on:
push:
branches: [dev, main]
pull_request:
branches: [dev, main]
concurrency:
group: ci-fast-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
jobs:
changes:
name: Detect Change Scope
runs-on: blacksmith-2vcpu-ubuntu-2404
outputs:
rust_changed: ${{ steps.scope.outputs.rust_changed }}
docs_only: ${{ steps.scope.outputs.docs_only }}
workflow_changed: ${{ steps.scope.outputs.workflow_changed }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Detect docs-only changes
id: scope
shell: bash
env:
EVENT_NAME: ${{ github.event_name }}
BASE_SHA: ${{ github.event_name == 'pull_request' && github.event.pull_request.base.sha || github.event.before }}
run: ./scripts/ci/detect_change_scope.sh
build-fast:
name: Build (Fast)
needs: [changes]
if: needs.changes.outputs.rust_changed == 'true' || needs.changes.outputs.workflow_changed == 'true'
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
with:
prefix-key: fast-build
cache-targets: true
- name: Build release binary
run: cargo build --release --locked --verbose
@@ -1,340 +1,170 @@
name: CI Run
name: CI
on:
push:
branches: [dev, main]
pull_request:
branches: [dev, main]
push:
branches: [master]
pull_request:
branches: [master]
concurrency:
group: ci-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
group: ci-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
contents: read
env:
CARGO_TERM_COLOR: always
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
jobs:
changes:
name: Detect Change Scope
runs-on: blacksmith-2vcpu-ubuntu-2404
outputs:
docs_only: ${{ steps.scope.outputs.docs_only }}
docs_changed: ${{ steps.scope.outputs.docs_changed }}
rust_changed: ${{ steps.scope.outputs.rust_changed }}
workflow_changed: ${{ steps.scope.outputs.workflow_changed }}
docs_files: ${{ steps.scope.outputs.docs_files }}
base_sha: ${{ steps.scope.outputs.base_sha }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
lint:
name: Lint
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: rustfmt, clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Detect docs-only changes
id: scope
shell: bash
env:
EVENT_NAME: ${{ github.event_name }}
BASE_SHA: ${{ github.event_name == 'pull_request' && github.event.pull_request.base.sha || github.event.before }}
run: ./scripts/ci/detect_change_scope.sh
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
lint:
name: Lint Gate (Format + Clippy + Strict Delta)
needs: [changes]
if: needs.changes.outputs.rust_changed == 'true'
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 25
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: rustfmt, clippy
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run rust quality gate
run: ./scripts/ci/rust_quality_gate.sh
- name: Run strict lint delta gate
env:
BASE_SHA: ${{ needs.changes.outputs.base_sha }}
run: ./scripts/ci/rust_strict_delta_gate.sh
- name: Check formatting
run: cargo fmt --all -- --check
test:
name: Test
needs: [changes, lint]
if: needs.changes.outputs.rust_changed == 'true' && needs.lint.result == 'success'
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 30
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run tests
run: cargo test --locked --verbose
- name: Clippy
run: cargo clippy --all-targets -- -D warnings
build:
name: Build (Smoke)
needs: [changes]
if: needs.changes.outputs.rust_changed == 'true'
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 20
lint-strict-delta:
name: Strict Delta Lint
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Build binary (smoke check)
run: cargo build --profile release-fast --locked --verbose
- name: Check binary size
run: bash scripts/ci/check_binary_size.sh target/release-fast/zeroclaw
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
docs-only:
name: Docs-Only Fast Path
needs: [changes]
if: needs.changes.outputs.docs_only == 'true'
runs-on: blacksmith-2vcpu-ubuntu-2404
steps:
- name: Skip heavy jobs for docs-only change
run: echo "Docs-only change detected. Rust lint/test/build skipped."
- name: Run strict delta lint gate
run: bash scripts/ci/rust_strict_delta_gate.sh
env:
BASE_SHA: ${{ github.event.pull_request.base.sha || github.event.before }}
non-rust:
name: Non-Rust Fast Path
needs: [changes]
if: needs.changes.outputs.docs_only != 'true' && needs.changes.outputs.rust_changed != 'true'
runs-on: blacksmith-2vcpu-ubuntu-2404
steps:
- name: Skip Rust jobs for non-Rust change scope
run: echo "No Rust-impacting files changed. Rust lint/test/build skipped."
test:
name: Test
runs-on: ubuntu-latest
timeout-minutes: 30
needs: [lint]
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
docs-quality:
name: Docs Quality
needs: [changes]
if: needs.changes.outputs.docs_changed == 'true'
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Markdown lint (changed lines only)
env:
BASE_SHA: ${{ needs.changes.outputs.base_sha }}
DOCS_FILES: ${{ needs.changes.outputs.docs_files }}
run: ./scripts/ci/docs_quality_gate.sh
- name: Install mold linker
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Collect added links
id: collect_links
shell: bash
env:
BASE_SHA: ${{ needs.changes.outputs.base_sha }}
DOCS_FILES: ${{ needs.changes.outputs.docs_files }}
run: |
set -euo pipefail
python3 ./scripts/ci/collect_changed_links.py \
--base "$BASE_SHA" \
--docs-files "$DOCS_FILES" \
--output .ci-added-links.txt
count=$(wc -l < .ci-added-links.txt | tr -d ' ')
echo "count=$count" >> "$GITHUB_OUTPUT"
if [ "$count" -gt 0 ]; then
echo "Added links queued for check:"
cat .ci-added-links.txt
else
echo "No added links found in changed docs lines."
fi
- name: Install cargo-nextest
run: curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
- name: Link check (offline, added links only)
if: steps.collect_links.outputs.count != '0'
uses: lycheeverse/lychee-action@a8c4c7cb88f0c7386610c35eb25108e448569cb0 # v2
with:
fail: true
args: >-
--offline
--no-progress
--format detailed
.ci-added-links.txt
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Run tests
run: cargo nextest run --locked
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
- name: Skip link check (no added links)
if: steps.collect_links.outputs.count == '0'
run: echo "No added links in changed docs lines. Link check skipped."
build:
name: Build ${{ matrix.target }}
runs-on: ${{ matrix.os }}
timeout-minutes: 40
needs: [lint]
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
- os: macos-14
target: aarch64-apple-darwin
- os: windows-latest
target: x86_64-pc-windows-msvc
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
lint-feedback:
name: Lint Feedback
if: github.event_name == 'pull_request'
needs: [changes, lint, docs-quality]
runs-on: blacksmith-2vcpu-ubuntu-2404
permissions:
contents: read
pull-requests: write
issues: write
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Install mold linker
if: runner.os == 'Linux'
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Post actionable lint failure summary
if: always()
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
RUST_CHANGED: ${{ needs.changes.outputs.rust_changed }}
DOCS_CHANGED: ${{ needs.changes.outputs.docs_changed }}
LINT_RESULT: ${{ needs.lint.result }}
LINT_DELTA_RESULT: ${{ needs.lint.result }}
DOCS_RESULT: ${{ needs.docs-quality.result }}
with:
script: |
const script = require('./.github/workflows/scripts/lint_feedback.js');
await script({github, context, core});
- name: Ensure web/dist placeholder exists
shell: bash
run: mkdir -p web/dist && touch web/dist/.gitkeep
workflow-owner-approval:
name: Workflow Owner Approval
needs: [changes]
if: github.event_name == 'pull_request' && needs.changes.outputs.workflow_changed == 'true'
runs-on: blacksmith-2vcpu-ubuntu-2404
permissions:
contents: read
pull-requests: read
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Build release
shell: bash
run: cargo build --profile ci --locked --target ${{ matrix.target }}
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
- name: Require owner approval for workflow file changes
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_OWNER_LOGINS: ${{ vars.WORKFLOW_OWNER_LOGINS }}
with:
script: |
const script = require('./.github/workflows/scripts/ci_workflow_owner_approval.js');
await script({ github, context, core });
docs-quality:
name: Docs Quality
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: actions/setup-node@1d0ff469b7ec7b3cb9d8673fde0c81c44821de2a # v4
with:
node-version: 20
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
with:
python-version: "3.12"
license-file-owner-guard:
name: License File Owner Guard
needs: [changes]
if: github.event_name == 'pull_request'
runs-on: blacksmith-2vcpu-ubuntu-2404
permissions:
contents: read
pull-requests: read
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Run docs quality gate
run: bash scripts/ci/docs_quality_gate.sh
env:
BASE_SHA: ${{ github.event.pull_request.base.sha || github.event.before }}
- name: Enforce owner-only edits for root license files
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const script = require('./.github/workflows/scripts/ci_license_file_owner_guard.js');
await script({ github, context, core });
ci-required:
name: CI Required Gate
if: always()
needs: [changes, lint, test, build, docs-only, non-rust, docs-quality, lint-feedback, workflow-owner-approval, license-file-owner-guard]
runs-on: blacksmith-2vcpu-ubuntu-2404
steps:
- name: Enforce required status
shell: bash
run: |
set -euo pipefail
event_name="${{ github.event_name }}"
rust_changed="${{ needs.changes.outputs.rust_changed }}"
docs_changed="${{ needs.changes.outputs.docs_changed }}"
workflow_changed="${{ needs.changes.outputs.workflow_changed }}"
docs_result="${{ needs.docs-quality.result }}"
workflow_owner_result="${{ needs.workflow-owner-approval.result }}"
license_owner_result="${{ needs.license-file-owner-guard.result }}"
if [ "${{ needs.changes.outputs.docs_only }}" = "true" ]; then
echo "workflow_owner_approval=${workflow_owner_result}"
echo "license_file_owner_guard=${license_owner_result}"
if [ "$event_name" = "pull_request" ] && [ "$workflow_changed" = "true" ] && [ "$workflow_owner_result" != "success" ]; then
echo "Workflow files changed but workflow owner approval gate did not pass."
exit 1
fi
if [ "$event_name" = "pull_request" ] && [ "$license_owner_result" != "success" ]; then
echo "License file owner guard did not pass."
exit 1
fi
if [ "$docs_changed" = "true" ] && [ "$docs_result" != "success" ]; then
echo "Docs-only change detected, but docs-quality did not pass."
exit 1
fi
echo "Docs-only fast path passed."
exit 0
fi
if [ "$rust_changed" != "true" ]; then
echo "rust_changed=false (non-rust fast path)"
echo "workflow_owner_approval=${workflow_owner_result}"
echo "license_file_owner_guard=${license_owner_result}"
if [ "$event_name" = "pull_request" ] && [ "$workflow_changed" = "true" ] && [ "$workflow_owner_result" != "success" ]; then
echo "Workflow files changed but workflow owner approval gate did not pass."
exit 1
fi
if [ "$event_name" = "pull_request" ] && [ "$license_owner_result" != "success" ]; then
echo "License file owner guard did not pass."
exit 1
fi
if [ "$docs_changed" = "true" ] && [ "$docs_result" != "success" ]; then
echo "Non-rust change touched docs, but docs-quality did not pass."
exit 1
fi
echo "Non-rust fast path passed."
exit 0
fi
lint_result="${{ needs.lint.result }}"
lint_strict_delta_result="${{ needs.lint.result }}"
test_result="${{ needs.test.result }}"
build_result="${{ needs.build.result }}"
echo "lint=${lint_result}"
echo "lint_strict_delta=${lint_strict_delta_result}"
echo "test=${test_result}"
echo "build=${build_result}"
echo "docs=${docs_result}"
echo "workflow_owner_approval=${workflow_owner_result}"
echo "license_file_owner_guard=${license_owner_result}"
if [ "$event_name" = "pull_request" ] && [ "$workflow_changed" = "true" ] && [ "$workflow_owner_result" != "success" ]; then
echo "Workflow files changed but workflow owner approval gate did not pass."
exit 1
fi
if [ "$event_name" = "pull_request" ] && [ "$license_owner_result" != "success" ]; then
echo "License file owner guard did not pass."
exit 1
fi
if [ "$event_name" = "pull_request" ]; then
if [ "$lint_result" != "success" ] || [ "$lint_strict_delta_result" != "success" ] || [ "$test_result" != "success" ] || [ "$build_result" != "success" ]; then
echo "Required PR CI jobs did not pass."
exit 1
fi
if [ "$docs_changed" = "true" ] && [ "$docs_result" != "success" ]; then
echo "PR changed docs, but docs-quality did not pass."
exit 1
fi
echo "PR required checks passed."
exit 0
fi
if [ "$lint_result" != "success" ] || [ "$lint_strict_delta_result" != "success" ] || [ "$test_result" != "success" ] || [ "$build_result" != "success" ]; then
echo "Required push CI jobs did not pass."
exit 1
fi
if [ "$docs_changed" = "true" ] && [ "$docs_result" != "success" ]; then
echo "Push changed docs, but docs-quality did not pass."
exit 1
fi
echo "Push required checks passed."
# Composite status check — branch protection requires this single job.
gate:
name: CI Required Gate
if: always()
needs: [lint, lint-strict-delta, test, build, docs-quality]
runs-on: ubuntu-latest
steps:
- name: Check upstream job results
env:
HAS_FAILURE: ${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}
run: |
if [[ "$HAS_FAILURE" == "true" ]]; then
echo "::error::One or more upstream jobs failed or were cancelled"
exit 1
fi
@@ -0,0 +1,77 @@
name: Cross-Platform Build
on:
workflow_dispatch:
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
jobs:
web:
name: Build Web Dashboard
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- uses: actions/upload-artifact@v4
with:
name: web-dist
path: web/dist/
retention-days: 1
build:
name: Build ${{ matrix.target }}
needs: [web]
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: macos-15-intel
target: x86_64-apple-darwin
- os: windows-latest
target: x86_64-pc-windows-msvc
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
- uses: actions/download-artifact@v8
with:
name: web-dist
path: web/dist/
- name: Install cross compiler
if: matrix.cross_compiler
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Build release
shell: bash
run: |
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --target ${{ matrix.target }}
-57
@@ -1,57 +0,0 @@
name: Feature Matrix
on:
schedule:
- cron: "30 4 * * 1" # Weekly Monday 4:30am UTC
workflow_dispatch:
concurrency:
group: feature-matrix-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
jobs:
feature-check:
name: Check (${{ matrix.name }})
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
include:
- name: no-default-features
args: --no-default-features
install_libudev: false
- name: all-features
args: --all-features
install_libudev: true
- name: hardware-only
args: --no-default-features --features hardware
install_libudev: false
- name: browser-native
args: --no-default-features --features browser-native
install_libudev: false
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
with:
key: features-${{ matrix.name }}
- name: Install Linux system dependencies for all-features
if: matrix.install_libudev
run: |
sudo apt-get update
sudo apt-get install -y --no-install-recommends libudev-dev pkg-config
- name: Check feature combination
run: cargo check --locked ${{ matrix.args }}
-239
@@ -1,239 +0,0 @@
# Main Branch Delivery Flows
This document explains what runs when code is proposed to `dev`, promoted to `main`, and released.
Use this with:
- [`docs/ci-map.md`](../../docs/ci-map.md)
- [`docs/pr-workflow.md`](../../docs/pr-workflow.md)
- [`docs/release-process.md`](../../docs/release-process.md)
## Event Summary
| Event | Main workflows |
| --- | --- |
| PR activity (`pull_request_target`) | `pr-intake-checks.yml`, `pr-labeler.yml`, `pr-auto-response.yml` |
| PR activity (`pull_request`) | `ci-run.yml`, `sec-audit.yml`, `main-promotion-gate.yml` (for `main` PRs), plus path-scoped workflows |
| Push to `dev`/`main` | `ci-run.yml`, `sec-audit.yml`, plus path-scoped workflows |
| Tag push (`v*`) | `pub-release.yml` publish mode, `pub-docker-img.yml` publish job |
| Scheduled/manual | `pub-release.yml` verification mode, `pub-homebrew-core.yml` (manual), `sec-codeql.yml`, `feature-matrix.yml`, `test-fuzz.yml`, `pr-check-stale.yml`, `pr-check-status.yml`, `sync-contributors.yml`, `test-benchmarks.yml`, `test-e2e.yml` |
## Runtime and Docker Matrix
Observed averages below are from recent completed runs (sampled from GitHub Actions on February 17, 2026). Values are directional, not an SLA.
| Workflow | Typical trigger in main flow | Avg runtime | Docker build? | Docker run? | Docker push? |
| --- | --- | ---:| --- | --- | --- |
| `pr-intake-checks.yml` | PR open/update (`pull_request_target`) | 14.5s | No | No | No |
| `pr-labeler.yml` | PR open/update (`pull_request_target`) | 53.7s | No | No | No |
| `pr-auto-response.yml` | PR/issue automation | 24.3s | No | No | No |
| `ci-run.yml` | PR + push to `dev`/`main` | 74.7s | No | No | No |
| `sec-audit.yml` | PR + push to `dev`/`main` | 127.2s | No | No | No |
| `workflow-sanity.yml` | Workflow-file changes | 34.2s | No | No | No |
| `pr-label-policy-check.yml` | Label policy/automation changes | 14.7s | No | No | No |
| `pub-docker-img.yml` (`pull_request`) | Docker build-input PR changes | 240.4s | Yes | Yes | No |
| `pub-docker-img.yml` (`push`) | tag push `v*` | 139.9s | Yes | No | Yes |
| `pub-release.yml` | Tag push `v*` (publish) + manual/scheduled verification (no publish) | N/A in recent sample | No | No | No |
| `pub-homebrew-core.yml` | Manual workflow dispatch only | N/A in recent sample | No | No | No |
Notes:
1. `pub-docker-img.yml` is the only workflow in the main PR/push path that builds Docker images.
2. Container runtime verification (`docker run`) occurs in PR smoke only.
3. Container registry push occurs on tag pushes (`v*`) only.
4. `ci-run.yml` "Build (Smoke)" builds Rust binaries, not Docker images.
## Step-By-Step
### 1) PR from branch in this repository -> `dev`
1. Contributor opens or updates PR against `dev`.
2. `pull_request_target` automation runs (typical runtimes are listed in the matrix above):
- `pr-intake-checks.yml` posts intake warnings/errors.
- `pr-labeler.yml` sets size/risk/scope labels.
- `pr-auto-response.yml` runs first-interaction and label routes.
3. `pull_request` CI workflows start:
- `ci-run.yml`
- `sec-audit.yml`
- path-scoped workflows if matching files changed:
- `pub-docker-img.yml` (Docker build-input paths only)
- `workflow-sanity.yml` (workflow files only)
- `pr-label-policy-check.yml` (label-policy files only)
4. In `ci-run.yml`, `changes` computes (a sketch of this detection follows this scenario's steps):
- `docs_only`
- `docs_changed`
- `rust_changed`
- `workflow_changed`
5. `build` runs for Rust-impacting changes.
6. On PRs, full lint/test/docs checks run when PR has label `ci:full`:
- `lint`
- `lint-strict-delta`
- `test`
- `docs-quality`
7. If `.github/workflows/**` changed, `workflow-owner-approval` must pass.
8. If root license files (`LICENSE-APACHE`, `LICENSE-MIT`) changed, `license-file-owner-guard` allows only PR author `willsarg`.
9. `lint-feedback` posts actionable comment if lint/docs gates fail.
10. `CI Required Gate` aggregates results to final pass/fail.
11. Maintainer merges PR once checks and review policy are satisfied.
12. Merge emits a `push` event on `dev` (see scenario 4).
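The detection script itself is not reproduced in this compare. A minimal sketch of what `scripts/ci/detect_change_scope.sh` computes (the path rules here are illustrative; only the output names are taken from the workflow):
```bash
#!/usr/bin/env bash
# Minimal sketch of scripts/ci/detect_change_scope.sh (illustrative only;
# the real script's path rules may differ).
set -euo pipefail
base="${BASE_SHA:?BASE_SHA must be set}"

docs_only=true; docs_changed=false; rust_changed=false; workflow_changed=false
while IFS= read -r f; do
  [ -z "$f" ] && continue
  case "$f" in
    .github/workflows/*)        workflow_changed=true; docs_only=false ;;
    *.rs|Cargo.toml|Cargo.lock) rust_changed=true; docs_only=false ;;
    *.md|docs/*)                docs_changed=true ;;
    *)                          docs_only=false ;;
  esac
done < <(git diff --name-only "$base"...HEAD)

{
  echo "docs_only=$docs_only"
  echo "docs_changed=$docs_changed"
  echo "rust_changed=$rust_changed"
  echo "workflow_changed=$workflow_changed"
} >> "${GITHUB_OUTPUT:-/dev/stdout}"
```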
### 2) PR from fork -> `dev`
1. External contributor opens PR from `fork/<branch>` into `zeroclaw:dev`.
2. Immediately on `opened`:
- `pull_request_target` workflows start with base-repo context and base-repo token:
- `pr-intake-checks.yml`
- `pr-labeler.yml`
- `pr-auto-response.yml`
- `pull_request` workflows are queued for the fork head commit:
- `ci-run.yml`
- `sec-audit.yml`
- path-scoped workflows (`pub-docker-img.yml`, `workflow-sanity.yml`, `pr-label-policy-check.yml`) if changed files match.
3. Fork-specific permission behavior in `pull_request` workflows:
- the token is restricted (read-focused), so jobs that attempt to write PR comments or statuses may be limited.
- secrets from the base repo are not exposed to fork PR `pull_request` jobs.
4. Approval gate possibility:
- if Actions settings require maintainer approval for fork workflows, the `pull_request` run stays in `action_required`/waiting state until approved.
5. Event fan-out after labeling:
- `pr-labeler.yml` and manual label changes emit `labeled`/`unlabeled` events.
- those events retrigger `pull_request_target` automation (`pr-labeler.yml` and `pr-auto-response.yml`), creating extra run volume/noise.
6. When contributor pushes new commits to fork branch (`synchronize`):
- reruns: `pr-intake-checks.yml`, `pr-labeler.yml`, `ci-run.yml`, `sec-audit.yml`, and matching path-scoped PR workflows.
- does not rerun `pr-auto-response.yml` unless label/open events occur.
7. `ci-run.yml` execution details for fork PR:
- `changes` computes `docs_only`, `docs_changed`, `rust_changed`, `workflow_changed`.
- `build` runs for Rust-impacting changes.
- `lint`/`lint-strict-delta`/`test`/`docs-quality` run on PR when `ci:full` label exists.
- `workflow-owner-approval` runs when `.github/workflows/**` changed.
- `CI Required Gate` emits final pass/fail for the PR head.
8. Fork PR merge blockers to check first when diagnosing stalls:
- run approval pending for fork workflows.
- `workflow-owner-approval` failing on workflow-file changes.
- `license-file-owner-guard` failing when root license files are modified by a non-owner PR author (sketched after this list).
- `CI Required Gate` failure caused by upstream jobs.
- repeated `pull_request_target` reruns from label churn causing noisy signals.
9. After merge, normal `push` workflows on `dev` execute (scenario 4).
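The owner check lives in `.github/workflows/scripts/ci_license_file_owner_guard.js`, which this compare does not show. A rough shell equivalent, with the owner login taken from the policy above and everything else illustrative:
```bash
#!/usr/bin/env bash
# Rough shell equivalent of the license-file owner guard (illustrative;
# the real check is implemented in JavaScript via actions/github-script).
set -euo pipefail
pr_author="${PR_AUTHOR:?}"   # e.g. github.event.pull_request.user.login
base="${BASE_SHA:?}"

# Root license files may only be touched by the designated owner.
if git diff --name-only "$base"...HEAD | grep -qxE 'LICENSE-(APACHE|MIT)'; then
  if [[ "${pr_author,,}" != "willsarg" ]]; then
    echo "::error::Root license files may only be modified by willsarg (author: ${pr_author})."
    exit 1
  fi
fi
echo "License file owner guard passed."
```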
### 3) Promotion PR `dev` -> `main`
1. Maintainer opens PR with head `dev` and base `main`.
2. `main-promotion-gate.yml` runs and fails unless PR author is `willsarg` or `theonlyhennygod`.
3. `main-promotion-gate.yml` also fails if head repo/branch is not `<this-repo>:dev`.
4. `ci-run.yml` and `sec-audit.yml` run on the promotion PR.
5. Maintainer merges PR once checks and review policy pass.
6. Merge emits a `push` event on `main`.
### 4) Push to `dev` or `main` (including after merge)
1. Commit reaches `dev` or `main` (usually from a merged PR).
2. `ci-run.yml` runs on `push`.
3. `sec-audit.yml` runs on `push`.
4. Path-filtered workflows run only if touched files match their filters.
5. In `ci-run.yml`, push behavior differs from PR behavior:
- Rust path: `lint`, `lint-strict-delta`, `test`, `build` are expected.
- Docs/non-rust paths: fast-path behavior applies.
6. `CI Required Gate` computes overall push result.
## Docker Publish Logic
Workflow: `.github/workflows/pub-docker-img.yml`
### PR behavior
1. Triggered on `pull_request` to `dev` or `main` when Docker build-input paths change.
2. Runs `PR Docker Smoke` job:
- Builds local smoke image with Blacksmith builder.
- Verifies container with `docker run ... --version`.
3. Typical runtime in recent sample: ~240.4s.
4. No registry push happens on PR events.
### Push behavior
1. `publish` job runs on tag pushes (`v*`) only.
2. The workflow's `push` trigger itself is limited to semantic version tags (`v*`), so branch pushes never invoke the publish job.
3. Login to `ghcr.io` uses `${{ github.actor }}` and `${{ secrets.GITHUB_TOKEN }}`.
4. Tag computation includes semantic tag from pushed git tag (`vX.Y.Z`) + SHA tag.
5. Multi-platform publish is used for tag pushes (`linux/amd64,linux/arm64`).
6. Typical runtime in recent sample: ~139.9s.
7. Result: pushed image tags under `ghcr.io/<owner>/<repo>`.
Important: Docker publish now requires a `v*` tag push; regular `dev`/`main` branch pushes do not publish images.
## Release Logic
Workflow: `.github/workflows/pub-release.yml`
1. Trigger modes:
- Tag push `v*` -> publish mode.
- Manual dispatch -> verification-only or publish mode (input-driven).
- Weekly schedule -> verification-only mode.
2. `prepare` resolves release context (`release_ref`, `release_tag`, publish/draft mode) and validates manual publish inputs.
- publish mode enforces `release_tag` == `Cargo.toml` version at the tag commit.
3. `build-release` builds matrix artifacts across Linux/macOS/Windows targets.
4. `verify-artifacts` enforces presence of all expected archives before any publish attempt.
5. In publish mode, workflow generates SBOM (`CycloneDX` + `SPDX`), `SHA256SUMS`, keyless cosign signatures, and verifies GHCR release-tag availability.
6. In publish mode, workflow creates/updates the GitHub Release for the resolved tag and commit-ish.
Manual Homebrew formula flow:
1. Run `.github/workflows/pub-homebrew-core.yml` with `release_tag=vX.Y.Z`.
2. Use `dry_run=true` first to validate formula patch and metadata.
3. Use `dry_run=false` to push from bot fork and open `homebrew-core` PR.
## Merge/Policy Notes
1. Workflow-file changes (`.github/workflows/**`) activate owner-approval gate in `ci-run.yml`.
2. PR lint/test strictness is intentionally controlled by `ci:full` label.
3. `sec-audit.yml` runs on both PR and push, plus scheduled weekly.
4. Some workflows are operational and non-merge-path (`pr-check-stale`, `pr-check-status`, `sync-contributors`, etc.).
5. Workflow-specific JavaScript helpers are organized under `.github/workflows/scripts/`.
## Mermaid Diagrams
### PR to Dev
```mermaid
flowchart TD
A["PR opened or updated -> dev"] --> B["pull_request_target lane"]
B --> B1["pr-intake-checks.yml"]
B --> B2["pr-labeler.yml"]
B --> B3["pr-auto-response.yml"]
A --> C["pull_request CI lane"]
C --> C1["ci-run.yml"]
C --> C2["sec-audit.yml"]
C --> C3["pub-docker-img.yml (if Docker paths changed)"]
C --> C4["workflow-sanity.yml (if workflow files changed)"]
C --> C5["pr-label-policy-check.yml (if policy files changed)"]
C1 --> D["CI Required Gate"]
D --> E{"Checks + review policy pass?"}
E -->|No| F["PR stays open"]
E -->|Yes| G["Merge PR"]
G --> H["push event on dev"]
```
### Promotion and Release
```mermaid
flowchart TD
D0["Commit reaches dev"] --> B0["ci-run.yml"]
D0 --> C0["sec-audit.yml"]
P["Promotion PR dev -> main"] --> PG["main-promotion-gate.yml"]
PG --> M["Merge to main"]
M --> A["Commit reaches main"]
A --> B["ci-run.yml"]
A --> C["sec-audit.yml"]
A --> D["path-scoped workflows (if matched)"]
T["Tag push v*"] --> R["pub-release.yml"]
W["Manual/Scheduled release verify"] --> R
T --> P["pub-docker-img.yml publish job"]
R --> R1["Artifacts + SBOM + checksums + signatures + GitHub Release"]
W --> R2["Verification build only (no GitHub Release publish)"]
P --> P1["Push ghcr image tags (version + sha)"]
```
## Quick Troubleshooting
1. Unexpected skipped jobs: inspect `scripts/ci/detect_change_scope.sh` outputs.
2. Workflow-change PR blocked: verify `WORKFLOW_OWNER_LOGINS` and approvals.
3. Fork PR appears stalled: check whether Actions run approval is pending.
4. Docker not published: confirm a `v*` tag was pushed to the intended commit.
-55
@@ -1,55 +0,0 @@
name: Main Promotion Gate
on:
pull_request:
branches: [main]
concurrency:
group: main-promotion-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
jobs:
enforce-dev-promotion:
name: Enforce Dev -> Main Promotion
runs-on: blacksmith-2vcpu-ubuntu-2404
steps:
- name: Validate PR source branch
shell: bash
env:
HEAD_REF: ${{ github.head_ref }}
HEAD_REPO: ${{ github.event.pull_request.head.repo.full_name }}
BASE_REPO: ${{ github.repository }}
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
run: |
set -euo pipefail
pr_author_lc="$(echo "${PR_AUTHOR}" | tr '[:upper:]' '[:lower:]')"
allowed_authors=("willsarg" "theonlyhennygod")
is_allowed_author=false
for allowed in "${allowed_authors[@]}"; do
if [[ "$pr_author_lc" == "$allowed" ]]; then
is_allowed_author=true
break
fi
done
if [[ "$is_allowed_author" != "true" ]]; then
echo "::error::PRs into main are restricted to: willsarg, theonlyhennygod. PR author: ${PR_AUTHOR}. Open this PR against dev instead."
exit 1
fi
if [[ "$HEAD_REPO" != "$BASE_REPO" ]]; then
echo "::error::PRs into main must originate from ${BASE_REPO}:dev or ${BASE_REPO}:release/*. Current head repo: ${HEAD_REPO}."
exit 1
fi
if [[ "$HEAD_REF" != "dev" && ! "$HEAD_REF" =~ ^release/ ]]; then
echo "::error::PRs into main must use head branch 'dev' or 'release/*'. Current head branch: ${HEAD_REF}."
exit 1
fi
echo "Promotion policy satisfied: author=${PR_AUTHOR}, source=${HEAD_REPO}:${HEAD_REF} -> main"
+130
@@ -0,0 +1,130 @@
# Master Branch Delivery Flows
This document explains what runs when code is proposed to `master` and released.
Use this with:
- [`docs/ci-map.md`](../../docs/contributing/ci-map.md)
- [`docs/pr-workflow.md`](../../docs/contributing/pr-workflow.md)
- [`docs/release-process.md`](../../docs/contributing/release-process.md)
## Branching Model
ZeroClaw uses a single default branch: `master`. All contributor PRs target `master` directly. There is no `dev` or promotion branch.
Current maintainers with PR approval authority: `theonlyhennygod`, `JordanTheJet`, and `SimianAstronaut7`.
## Active Workflows
| File | Trigger | Purpose |
| --- | --- | --- |
| `checks-on-pr.yml` | `pull_request` → `master` | Lint + test + build + security audit on every PR |
| `cross-platform-build-manual.yml` | `workflow_dispatch` | Full platform build matrix (manual) |
| `release-beta-on-push.yml` | `push` → `master` | Beta release on every master commit |
| `release-stable-manual.yml` | `workflow_dispatch` | Stable release (manual, version-gated) |
## Event Summary
| Event | Workflows triggered |
| --- | --- |
| PR opened or updated against `master` | `checks-on-pr.yml` |
| Push to `master` (including after merge) | `release-beta-on-push.yml` |
| Manual dispatch | `cross-platform-build-manual.yml`, `release-stable-manual.yml` |
## Step-By-Step
### 1) PR → `master`
1. Contributor opens or updates a PR against `master`.
2. `checks-on-pr.yml` starts:
- `lint` job: runs `cargo fmt --check` and `cargo clippy -D warnings`.
- `test` job: runs `cargo nextest run --locked` on `ubuntu-latest` with Rust 1.92.0 and mold linker.
- `build` job (matrix): compiles release binary on `x86_64-unknown-linux-gnu`, `aarch64-apple-darwin`, and `x86_64-pc-windows-msvc`.
- `security` job: runs `cargo audit` and `cargo deny check licenses sources`.
- Concurrency group cancels in-progress runs for the same PR on new pushes.
3. All jobs must pass before merge; the composite gate that aggregates them is sketched below.
4. Maintainer (`theonlyhennygod`, `JordanTheJet`, or `SimianAstronaut7`) merges PR once checks and review policy are satisfied.
5. Merge emits a `push` event on `master` (see section 2).
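The merge requirement is enforced by the composite `gate` job in `checks-on-pr.yml` (its YAML appears earlier in this compare): it fails if any upstream result is `failure` or `cancelled`. Expressed as plain shell, the aggregation is roughly:
```bash
#!/usr/bin/env bash
# Shell rendering of the gate's contains(needs.*.result, ...) check.
# Pass one result string per upstream job, e.g.:
#   ./gate.sh success success failure success
set -euo pipefail
for result in "$@"; do
  if [[ "$result" == "failure" || "$result" == "cancelled" ]]; then
    echo "::error::One or more upstream jobs failed or were cancelled"
    exit 1
  fi
done
echo "All upstream jobs passed."
```
Branch protection then only needs to require the single `CI Required Gate` check instead of tracking every matrix leg.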
### 2) Push to `master` (including after merge)
1. Commit reaches `master`.
2. `release-beta-on-push.yml` (Release Beta) starts:
- `version` job: computes beta tag as `v{cargo_version}-beta.{run_number}` (sketched below).
- `build` job (matrix, 4 targets): `x86_64-linux`, `aarch64-linux`, `aarch64-darwin`, `x86_64-windows`.
- `publish` job: generates `SHA256SUMS`, creates a GitHub pre-release with all artifacts. Artifact retention: 7 days.
- `docker` job: builds multi-platform image (`linux/amd64,linux/arm64`) and pushes to `ghcr.io` with `:beta` and the versioned beta tag.
3. This runs on every push to `master` without filtering. Every merged PR produces a beta pre-release.
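A minimal sketch of that version computation, assuming the standard `GITHUB_RUN_NUMBER` variable (the exact step in `release-beta-on-push.yml` may differ):
```bash
#!/usr/bin/env bash
# Compute the beta tag v{cargo_version}-beta.{run_number}.
# The sed pattern mirrors the one used in this repo's other release workflows.
set -euo pipefail
cargo_version="$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -n1)"
echo "tag=v${cargo_version}-beta.${GITHUB_RUN_NUMBER:?}" >> "${GITHUB_OUTPUT:-/dev/stdout}"
```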
### 3) Stable Release (manual)
1. Maintainer runs `release-stable-manual.yml` via `workflow_dispatch` with a version input (e.g. `0.2.0`).
2. `validate` job checks (sketched after this list):
- Input matches semver `X.Y.Z` format.
- `Cargo.toml` version matches input exactly.
- Tag `vX.Y.Z` does not already exist on the remote.
3. `build` job (matrix, same 4 targets as beta): compiles release binary.
4. `publish` job: generates `SHA256SUMS`, creates a stable GitHub Release (not pre-release). Artifact retention: 14 days.
5. `docker` job: pushes to `ghcr.io` with `:latest` and `:vX.Y.Z`.
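A sketch of the three `validate` checks, reusing the semver regex and `git ls-remote` tag probe seen in the repo's other release workflows (the `VERSION` argument name is illustrative):
```bash
#!/usr/bin/env bash
# Sketch of the stable-release validate job (illustrative).
set -euo pipefail
VERSION="${1:?usage: validate.sh X.Y.Z}"

# 1) Input must be plain semver X.Y.Z.
if [[ ! "$VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
  echo "::error::version must match X.Y.Z"; exit 1
fi

# 2) Cargo.toml version must match the input exactly.
cargo_version="$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -n1)"
if [[ "$cargo_version" != "$VERSION" ]]; then
  echo "::error::Cargo.toml has ${cargo_version}, input is ${VERSION}"; exit 1
fi

# 3) Tag vX.Y.Z must not already exist on the remote.
if git ls-remote --exit-code --tags origin "refs/tags/v${VERSION}" >/dev/null 2>&1; then
  echo "::error::tag v${VERSION} already exists on origin"; exit 1
fi
echo "validate OK: v${VERSION}"
```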
### 4) Full Platform Build (manual)
1. Maintainer runs `cross-platform-build-manual.yml` via `workflow_dispatch`.
2. `build` job (matrix, 3 targets): `aarch64-linux-gnu`, `x86_64-darwin` (macOS 15 Intel), `x86_64-windows-msvc`.
3. Build-only, no tests, no publish. Used to verify cross-compilation on platforms not covered by `checks-on-pr.yml`.
## Build Targets by Workflow
| Target | `checks-on-pr.yml` | `cross-platform-build-manual.yml` | `release-beta-on-push.yml` | `release-stable-manual.yml` |
| --- | :---: | :---: | :---: | :---: |
| `x86_64-unknown-linux-gnu` | ✓ | | ✓ | ✓ |
| `aarch64-unknown-linux-gnu` | | ✓ | ✓ | ✓ |
| `aarch64-apple-darwin` | ✓ | | ✓ | ✓ |
| `x86_64-apple-darwin` | | ✓ | | |
| `x86_64-pc-windows-msvc` | ✓ | ✓ | ✓ | ✓ |
## Mermaid Diagrams
### PR to Master
```mermaid
flowchart TD
A["PR opened or updated → master"] --> B["checks-on-pr.yml"]
B --> B0["lint: fmt + clippy"]
B --> B1["test: cargo nextest (ubuntu-latest)"]
B --> B2["build: x86_64-linux + aarch64-darwin"]
B --> B3["security: audit + deny"]
B0 & B1 & B2 & B3 --> C{"Checks pass?"}
C -->|No| D["PR stays open"]
C -->|Yes| E["Maintainer merges"]
E --> F["push event on master"]
```
### Beta Release (on every master push)
```mermaid
flowchart TD
A["Push to master"] --> B["release-beta-on-push.yml"]
B --> B1["version: compute v{x.y.z}-beta.{N}"]
B1 --> B2["build: 4 targets"]
B2 --> B3["publish: GitHub pre-release + SHA256SUMS"]
B2 --> B4["docker: push ghcr.io :beta + versioned tag"]
```
### Stable Release (manual)
```mermaid
flowchart TD
A["workflow_dispatch: version=X.Y.Z"] --> B["release-stable-manual.yml"]
B --> B1["validate: semver + Cargo.toml + tag uniqueness"]
B1 --> B2["build: 4 targets"]
B2 --> B3["publish: GitHub stable release + SHA256SUMS"]
B2 --> B4["docker: push ghcr.io :latest + :vX.Y.Z"]
```
## Quick Troubleshooting
1. **Quality gate failing on PR**: check `lint` job for formatting/clippy issues; check `test` job for test failures; check `build` job for compile errors; check `security` job for audit/deny failures.
2. **Beta release not appearing**: confirm the push landed on `master` (not another branch); check `release-beta-on-push.yml` run status.
3. **Stable release failing at validate**: ensure `Cargo.toml` version matches the input version and the tag does not already exist.
4. **Full matrix build needed**: run `cross-platform-build-manual.yml` manually from the Actions tab.
-86
@@ -1,86 +0,0 @@
name: PR Auto Responder
on:
issues:
types: [opened, reopened, labeled, unlabeled]
pull_request_target:
branches: [dev, main]
types: [opened, labeled, unlabeled]
permissions: {}
env:
LABEL_POLICY_PATH: .github/label-policy.json
jobs:
contributor-tier-issues:
if: >-
(github.event_name == 'issues' &&
(github.event.action == 'opened' || github.event.action == 'reopened' || github.event.action == 'labeled' || github.event.action == 'unlabeled')) ||
(github.event_name == 'pull_request_target' &&
(github.event.action == 'labeled' || github.event.action == 'unlabeled'))
runs-on: ubuntu-latest
permissions:
contents: read
issues: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Apply contributor tier label for issue author
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
LABEL_POLICY_PATH: .github/label-policy.json
with:
script: |
const script = require('./.github/workflows/scripts/pr_auto_response_contributor_tier.js');
await script({ github, context, core });
first-interaction:
if: github.event.action == 'opened'
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- name: Greet first-time contributors
uses: actions/first-interaction@a1db7729b356323c7988c20ed6f0d33fe31297be # v1
with:
repo_token: ${{ secrets.GITHUB_TOKEN }}
issue_message: |
Thanks for opening this issue.
Before maintainers triage it, please confirm:
- Repro steps are complete and run on latest `main`
- Environment details are included (OS, Rust version, ZeroClaw version)
- Sensitive values are redacted
This helps us keep issue throughput high and response latency low.
pr_message: |
Thanks for contributing to ZeroClaw.
For faster review, please ensure:
- PR template sections are fully completed
- Results of `cargo fmt --all -- --check`, `cargo clippy --all-targets -- -D warnings`, and `cargo test` are included
- If automation/agents were used heavily, add brief workflow notes
- Scope is focused (prefer one concern per PR)
See `CONTRIBUTING.md` and `docs/pr-workflow.md` for full collaboration rules.
labeled-routes:
if: github.event.action == 'labeled'
runs-on: ubuntu-latest
permissions:
contents: read
issues: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Handle label-driven responses
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const script = require('./.github/workflows/scripts/pr_auto_response_labeled_routes.js');
await script({ github, context, core });
-44
@@ -1,44 +0,0 @@
name: PR Check Stale
on:
schedule:
- cron: "20 2 * * *"
workflow_dispatch:
permissions: {}
jobs:
stale:
permissions:
issues: write
pull-requests: write
runs-on: ubuntu-latest
steps:
- name: Mark stale issues and pull requests
uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
days-before-issue-stale: 21
days-before-issue-close: 7
days-before-pr-stale: 14
days-before-pr-close: 7
stale-issue-label: stale
stale-pr-label: stale
exempt-issue-labels: security,pinned,no-stale,no-pr-hygiene,maintainer
exempt-pr-labels: no-stale,no-pr-hygiene,maintainer
remove-stale-when-updated: true
exempt-all-assignees: true
operations-per-run: 300
stale-issue-message: |
This issue was automatically marked as stale due to inactivity.
Please provide an update, reproduction details, or current status to keep it open.
close-issue-message: |
Closing this issue due to inactivity.
If the problem still exists on the latest `main`, please open a new issue with fresh repro steps.
close-issue-reason: not_planned
stale-pr-message: |
This PR was automatically marked as stale due to inactivity.
Please rebase/update and post the latest validation results.
close-pr-message: |
Closing this PR due to inactivity.
Maintainers can reopen once the branch is updated and validation is provided.
-32
@@ -1,32 +0,0 @@
name: PR Check Status
on:
schedule:
- cron: "15 8 * * *" # Once daily at 8:15am UTC
workflow_dispatch:
permissions: {}
concurrency:
group: pr-check-status
cancel-in-progress: true
jobs:
nudge-stale-prs:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
issues: write
env:
STALE_HOURS: "48"
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Nudge PRs that need rebase or CI refresh
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const script = require('./.github/workflows/scripts/pr_check_status_nudge.js');
await script({ github, context, core });
-31
@@ -1,31 +0,0 @@
name: PR Intake Checks
on:
pull_request_target:
branches: [dev, main]
types: [opened, reopened, synchronize, edited, ready_for_review]
concurrency:
group: pr-intake-checks-${{ github.event.pull_request.number || github.run_id }}
cancel-in-progress: true
permissions:
contents: read
pull-requests: write
issues: write
jobs:
intake:
name: Intake Checks
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Run safe PR intake checks
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const script = require('./.github/workflows/scripts/pr_intake_checks.js');
await script({ github, context, core });
@@ -1,74 +0,0 @@
name: PR Label Policy Check
on:
pull_request:
paths:
- ".github/label-policy.json"
- ".github/workflows/pr-labeler.yml"
- ".github/workflows/pr-auto-response.yml"
push:
paths:
- ".github/label-policy.json"
- ".github/workflows/pr-labeler.yml"
- ".github/workflows/pr-auto-response.yml"
concurrency:
group: pr-label-policy-check-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
jobs:
contributor-tier-consistency:
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
steps:
- name: Checkout
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Verify shared label policy and workflow wiring
shell: bash
run: |
set -euo pipefail
python3 - <<'PY'
import json
import re
from pathlib import Path
policy_path = Path('.github/label-policy.json')
policy = json.loads(policy_path.read_text(encoding='utf-8'))
color = str(policy.get('contributor_tier_color', '')).upper()
rules = policy.get('contributor_tiers', [])
if not re.fullmatch(r'[0-9A-F]{6}', color):
raise SystemExit('invalid contributor_tier_color in .github/label-policy.json')
if not rules:
raise SystemExit('contributor_tiers must not be empty in .github/label-policy.json')
labels = set()
prev_min = None
for entry in rules:
label = str(entry.get('label', '')).strip().lower()
min_merged = int(entry.get('min_merged_prs', 0))
if not label.endswith('contributor'):
raise SystemExit(f'invalid contributor tier label: {label}')
if label in labels:
raise SystemExit(f'duplicate contributor tier label: {label}')
if prev_min is not None and min_merged > prev_min:
raise SystemExit('contributor_tiers must be sorted descending by min_merged_prs')
labels.add(label)
prev_min = min_merged
workflow_paths = [
Path('.github/workflows/pr-labeler.yml'),
Path('.github/workflows/pr-auto-response.yml'),
]
for workflow in workflow_paths:
text = workflow.read_text(encoding='utf-8')
if '.github/label-policy.json' not in text:
raise SystemExit(f'{workflow} must load .github/label-policy.json')
if re.search(r'contributorTierColor\s*=\s*"[0-9A-Fa-f]{6}"', text):
raise SystemExit(f'{workflow} contains hardcoded contributorTierColor')
print('label policy file is valid and workflow consumers are wired to shared policy')
PY
-53
@@ -1,53 +0,0 @@
name: PR Labeler
on:
pull_request_target:
branches: [dev, main]
types: [opened, reopened, synchronize, edited, labeled, unlabeled]
workflow_dispatch:
inputs:
mode:
description: "Run mode for managed-label governance"
required: true
default: "audit"
type: choice
options:
- audit
- repair
concurrency:
group: pr-labeler-${{ github.event.pull_request.number || github.run_id }}
cancel-in-progress: true
permissions:
contents: read
pull-requests: write
issues: write
env:
LABEL_POLICY_PATH: .github/label-policy.json
jobs:
label:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Apply path labels
if: github.event_name == 'pull_request_target'
uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
continue-on-error: true
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
sync-labels: true
- name: Apply size/risk/module labels
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
continue-on-error: true
env:
LABEL_POLICY_PATH: .github/label-policy.json
with:
script: |
const script = require('./.github/workflows/scripts/pr_labeler.js');
await script({ github, context, core });
-175
@@ -1,175 +0,0 @@
name: Pub Docker Img
on:
push:
tags: ["v*"]
pull_request:
branches: [dev, main]
paths:
- "Dockerfile"
- ".dockerignore"
- "docker-compose.yml"
- "rust-toolchain.toml"
- "dev/config.template.toml"
- ".github/workflows/pub-docker-img.yml"
workflow_dispatch:
concurrency:
group: docker-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
pr-smoke:
name: PR Docker Smoke
if: github.event_name == 'workflow_dispatch' || (github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository)
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 25
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Setup Blacksmith Builder
uses: useblacksmith/setup-docker-builder@ef12d5b165b596e3aa44ea8198d8fde563eab402 # v1
- name: Extract metadata (tags, labels)
if: github.event_name == 'pull_request'
id: meta
uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=ref,event=pr
- name: Build smoke image
uses: useblacksmith/build-push-action@30c71162f16ea2c27c3e21523255d209b8b538c1 # v2
with:
context: .
push: false
load: true
provenance: false
sbom: false
tags: zeroclaw-pr-smoke:latest
labels: ${{ steps.meta.outputs.labels || '' }}
platforms: linux/amd64
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Verify image
run: docker run --rm zeroclaw-pr-smoke:latest --version
publish:
name: Build and Push Docker Image
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && github.repository == 'zeroclaw-labs/zeroclaw'
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 45
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Setup Blacksmith Builder
uses: useblacksmith/setup-docker-builder@ef12d5b165b596e3aa44ea8198d8fde563eab402 # v1
- name: Log in to Container Registry
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Compute tags
id: meta
shell: bash
run: |
set -euo pipefail
IMAGE="${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}"
SHA_TAG="${IMAGE}:sha-${GITHUB_SHA::12}"
if [[ "${GITHUB_REF}" != refs/tags/v* ]]; then
echo "::error::Docker publish is restricted to v* tag pushes."
exit 1
fi
TAG_NAME="${GITHUB_REF#refs/tags/}"
TAGS="${IMAGE}:${TAG_NAME},${SHA_TAG}"
echo "tags=${TAGS}" >> "$GITHUB_OUTPUT"
- name: Build and push Docker image
uses: useblacksmith/build-push-action@30c71162f16ea2c27c3e21523255d209b8b538c1 # v2
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Set GHCR package visibility to public
shell: bash
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
set -euo pipefail
owner="${GITHUB_REPOSITORY_OWNER,,}"
repo="${GITHUB_REPOSITORY#*/}"
# Package path can vary depending on repository/package linkage.
candidates=(
"$repo"
"${owner}%2F${repo}"
)
for scope in orgs users; do
for pkg in "${candidates[@]}"; do
code="$(curl -sS -o /tmp/ghcr-visibility.json -w "%{http_code}" \
-X PATCH \
-H "Authorization: Bearer ${GH_TOKEN}" \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
"https://api.github.com/${scope}/${owner}/packages/container/${pkg}/visibility" \
-d '{"visibility":"public"}' || true)"
if [ "$code" = "200" ] || [ "$code" = "204" ]; then
echo "GHCR package visibility is public (${scope}/${owner}/${pkg})."
exit 0
fi
echo "Visibility attempt ${scope}/${owner}/${pkg} returned HTTP ${code}."
done
done
echo "::warning::Unable to update GHCR visibility via API in this run; proceeding to direct anonymous pull verification."
- name: Verify anonymous GHCR pull access
shell: bash
run: |
set -euo pipefail
TAG_NAME="${GITHUB_REF#refs/tags/}"
token_resp="$(curl -sS "https://ghcr.io/token?scope=repository:${GITHUB_REPOSITORY}:pull")"
token="$(echo "$token_resp" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')"
if [ -z "$token" ]; then
echo "::error::Anonymous GHCR token request failed: $token_resp"
exit 1
fi
code="$(curl -sS -o /tmp/ghcr-manifest.json -w "%{http_code}" \
-H "Authorization: Bearer ${token}" \
-H "Accept: application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json" \
"https://ghcr.io/v2/${GITHUB_REPOSITORY}/manifests/${TAG_NAME}")"
if [ "$code" != "200" ]; then
echo "::error::Anonymous manifest pull failed with HTTP ${code}"
cat /tmp/ghcr-manifest.json || true
exit 1
fi
echo "Anonymous GHCR pull access verified."
-221
@@ -1,221 +0,0 @@
name: Pub Homebrew Core
on:
workflow_dispatch:
inputs:
release_tag:
description: "Existing release tag to publish (vX.Y.Z)"
required: true
type: string
dry_run:
description: "Patch formula only (no push/PR)"
required: false
default: true
type: boolean
concurrency:
group: homebrew-core-${{ github.run_id }}
cancel-in-progress: false
permissions:
contents: read
jobs:
publish-homebrew-core:
name: Publish Homebrew Core PR
runs-on: blacksmith-2vcpu-ubuntu-2404
env:
UPSTREAM_REPO: Homebrew/homebrew-core
FORMULA_PATH: Formula/z/zeroclaw.rb
RELEASE_TAG: ${{ inputs.release_tag }}
DRY_RUN: ${{ inputs.dry_run }}
BOT_FORK_REPO: ${{ vars.HOMEBREW_CORE_BOT_FORK_REPO }}
BOT_EMAIL: ${{ vars.HOMEBREW_CORE_BOT_EMAIL }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Validate release tag and version alignment
id: release_meta
shell: bash
run: |
set -euo pipefail
semver_pattern='^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?$'
if [[ ! "$RELEASE_TAG" =~ $semver_pattern ]]; then
echo "::error::release_tag must match semver-like format (vX.Y.Z[-suffix])."
exit 1
fi
if ! git rev-parse "refs/tags/${RELEASE_TAG}" >/dev/null 2>&1; then
git fetch --tags origin
fi
tag_version="${RELEASE_TAG#v}"
cargo_version="$(git show "${RELEASE_TAG}:Cargo.toml" | sed -n 's/^version = "\([^"]*\)"/\1/p' | head -n1)"
if [[ -z "$cargo_version" ]]; then
echo "::error::Unable to read Cargo.toml version from tag ${RELEASE_TAG}."
exit 1
fi
if [[ "$cargo_version" != "$tag_version" ]]; then
echo "::error::Tag ${RELEASE_TAG} does not match Cargo.toml version (${cargo_version})."
echo "::error::Bump Cargo.toml first, then publish Homebrew."
exit 1
fi
tarball_url="https://github.com/${GITHUB_REPOSITORY}/archive/refs/tags/${RELEASE_TAG}.tar.gz"
tarball_sha="$(curl -fsSL "$tarball_url" | sha256sum | awk '{print $1}')"
{
echo "tag_version=$tag_version"
echo "tarball_url=$tarball_url"
echo "tarball_sha=$tarball_sha"
} >> "$GITHUB_OUTPUT"
{
echo "### Release Metadata"
echo "- release_tag: ${RELEASE_TAG}"
echo "- cargo_version: ${cargo_version}"
echo "- tarball_sha256: ${tarball_sha}"
echo "- dry_run: ${DRY_RUN}"
} >> "$GITHUB_STEP_SUMMARY"
- name: Patch Homebrew formula
id: patch_formula
shell: bash
env:
HOMEBREW_CORE_BOT_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
GH_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
run: |
set -euo pipefail
tmp_repo="$(mktemp -d)"
echo "tmp_repo=$tmp_repo" >> "$GITHUB_OUTPUT"
if [[ "$DRY_RUN" == "true" ]]; then
git clone --depth=1 "https://github.com/${UPSTREAM_REPO}.git" "$tmp_repo/homebrew-core"
else
if [[ -z "${BOT_FORK_REPO}" ]]; then
echo "::error::Repository variable HOMEBREW_CORE_BOT_FORK_REPO is required when dry_run=false."
exit 1
fi
if [[ -z "${HOMEBREW_CORE_BOT_TOKEN}" ]]; then
echo "::error::Repository secret HOMEBREW_CORE_BOT_TOKEN is required when dry_run=false."
exit 1
fi
if [[ "$BOT_FORK_REPO" != */* ]]; then
echo "::error::HOMEBREW_CORE_BOT_FORK_REPO must be in owner/repo format."
exit 1
fi
if ! command -v gh >/dev/null 2>&1; then
echo "::error::gh CLI is required on the runner."
exit 1
fi
if [[ -z "${GH_TOKEN:-}" ]]; then
echo "::error::Repository secret HOMEBREW_CORE_BOT_TOKEN is missing."
exit 1
fi
if ! gh api "repos/${BOT_FORK_REPO}" >/dev/null 2>&1; then
echo "::error::HOMEBREW_CORE_BOT_TOKEN cannot access ${BOT_FORK_REPO}."
exit 1
fi
gh repo clone "${BOT_FORK_REPO}" "$tmp_repo/homebrew-core" -- --depth=1
fi
repo_dir="$tmp_repo/homebrew-core"
formula_file="$repo_dir/$FORMULA_PATH"
if [[ ! -f "$formula_file" ]]; then
echo "::error::Formula file not found: $FORMULA_PATH"
exit 1
fi
if [[ "$DRY_RUN" == "false" ]]; then
if git -C "$repo_dir" remote get-url upstream >/dev/null 2>&1; then
git -C "$repo_dir" remote set-url upstream "https://github.com/${UPSTREAM_REPO}.git"
else
git -C "$repo_dir" remote add upstream "https://github.com/${UPSTREAM_REPO}.git"
fi
if git -C "$repo_dir" ls-remote --exit-code --heads upstream main >/dev/null 2>&1; then
upstream_ref="main"
else
upstream_ref="master"
fi
git -C "$repo_dir" fetch --depth=1 upstream "$upstream_ref"
branch_name="zeroclaw-${RELEASE_TAG}-${GITHUB_RUN_ID}"
git -C "$repo_dir" checkout -B "$branch_name" "upstream/$upstream_ref"
echo "branch_name=$branch_name" >> "$GITHUB_OUTPUT"
fi
tarball_url="${{ steps.release_meta.outputs.tarball_url }}"
tarball_sha="${{ steps.release_meta.outputs.tarball_sha }}"
perl -0pi -e "s|^ url \".*\"| url \"${tarball_url}\"|m" "$formula_file"
perl -0pi -e "s|^ sha256 \".*\"| sha256 \"${tarball_sha}\"|m" "$formula_file"
perl -0pi -e "s|^ license \".*\"| license \"Apache-2.0 OR MIT\"|m" "$formula_file"
perl -0pi -e 's|^ head "https://github\.com/zeroclaw-labs/zeroclaw\.git".*| head "https://github.com/zeroclaw-labs/zeroclaw.git"|m' "$formula_file"
git -C "$repo_dir" diff -- "$FORMULA_PATH" > "$tmp_repo/formula.diff"
if [[ ! -s "$tmp_repo/formula.diff" ]]; then
echo "::error::No formula changes generated. Nothing to publish."
exit 1
fi
{
echo "### Formula Diff"
echo '```diff'
cat "$tmp_repo/formula.diff"
echo '```'
} >> "$GITHUB_STEP_SUMMARY"
- name: Push branch and open Homebrew PR
if: ${{ inputs.dry_run == false }}
shell: bash
env:
GH_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
run: |
set -euo pipefail
repo_dir="${{ steps.patch_formula.outputs.tmp_repo }}/homebrew-core"
branch_name="${{ steps.patch_formula.outputs.branch_name }}"
tag_version="${{ steps.release_meta.outputs.tag_version }}"
fork_owner="${BOT_FORK_REPO%%/*}"
bot_email="${BOT_EMAIL:-${fork_owner}@users.noreply.github.com}"
git -C "$repo_dir" config user.name "$fork_owner"
git -C "$repo_dir" config user.email "$bot_email"
git -C "$repo_dir" add "$FORMULA_PATH"
git -C "$repo_dir" commit -m "zeroclaw ${tag_version}"
if [[ -z "${GH_TOKEN:-}" ]]; then
echo "::error::Repository secret HOMEBREW_CORE_BOT_TOKEN is missing."
exit 1
fi
gh auth setup-git
git -C "$repo_dir" push --set-upstream origin "$branch_name"
pr_title="zeroclaw ${tag_version}"
pr_body=$(cat <<EOF
Automated formula bump from ZeroClaw release workflow.
- Release tag: ${RELEASE_TAG}
- Source tarball: ${{ steps.release_meta.outputs.tarball_url }}
- Source sha256: ${{ steps.release_meta.outputs.tarball_sha }}
EOF
)
gh pr create \
--repo "$UPSTREAM_REPO" \
--base main \
--head "${fork_owner}:${branch_name}" \
--title "$pr_title" \
--body "$pr_body"
- name: Summary output
shell: bash
run: |
set -euo pipefail
if [[ "$DRY_RUN" == "true" ]]; then
echo "Dry run complete: formula diff generated, no push/PR performed."
else
echo "Publish complete: branch pushed and PR opened from bot fork."
fi
-435
@@ -1,435 +0,0 @@
name: Pub Release
on:
push:
tags: ["v*"]
workflow_dispatch:
inputs:
release_ref:
description: "Git ref (branch, tag, or SHA) to build"
required: false
default: "main"
type: string
publish_release:
description: "Publish a GitHub release (false = verification build only)"
required: false
default: false
type: boolean
release_tag:
description: "Existing release tag (required when publish_release=true), e.g. v0.1.1"
required: false
default: ""
type: string
draft:
description: "Create release as draft (manual publish only)"
required: false
default: true
type: boolean
schedule:
# Weekly release-readiness verification on default branch (no publish)
- cron: "17 8 * * 1"
concurrency:
group: release-${{ github.ref || github.run_id }}
cancel-in-progress: false
permissions:
contents: write
packages: read
id-token: write # Required for cosign keyless signing via OIDC
env:
CARGO_TERM_COLOR: always
jobs:
prepare:
name: Prepare Release Context
runs-on: blacksmith-2vcpu-ubuntu-2404
outputs:
release_ref: ${{ steps.vars.outputs.release_ref }}
release_tag: ${{ steps.vars.outputs.release_tag }}
publish_release: ${{ steps.vars.outputs.publish_release }}
draft_release: ${{ steps.vars.outputs.draft_release }}
steps:
- name: Resolve release inputs
id: vars
shell: bash
run: |
set -euo pipefail
event_name="${GITHUB_EVENT_NAME}"
publish_release="false"
draft_release="false"
semver_pattern='^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?$'
if [[ "$event_name" == "push" ]]; then
release_ref="${GITHUB_REF_NAME}"
release_tag="${GITHUB_REF_NAME}"
publish_release="true"
elif [[ "$event_name" == "workflow_dispatch" ]]; then
release_ref="${{ inputs.release_ref }}"
publish_release="${{ inputs.publish_release }}"
draft_release="${{ inputs.draft }}"
if [[ "$publish_release" == "true" ]]; then
release_tag="${{ inputs.release_tag }}"
if [[ -z "$release_tag" ]]; then
echo "::error::release_tag is required when publish_release=true"
exit 1
fi
release_ref="$release_tag"
else
release_tag="verify-${GITHUB_SHA::12}"
fi
else
# schedule
release_ref="main"
release_tag="verify-${GITHUB_SHA::12}"
fi
if [[ "$publish_release" == "true" ]]; then
if [[ ! "$release_tag" =~ $semver_pattern ]]; then
echo "::error::release_tag must match semver-like format (vX.Y.Z[-suffix])"
exit 1
fi
if ! git ls-remote --exit-code --tags "https://github.com/${GITHUB_REPOSITORY}.git" "refs/tags/${release_tag}" >/dev/null; then
echo "::error::Tag ${release_tag} does not exist on origin. Push the tag first, then rerun manual publish."
exit 1
fi
# Guardrail: release tags must resolve to commits already reachable from main.
tmp_repo="$(mktemp -d)"
trap 'rm -rf "$tmp_repo"' EXIT
git -C "$tmp_repo" init -q
git -C "$tmp_repo" remote add origin "https://github.com/${GITHUB_REPOSITORY}.git"
git -C "$tmp_repo" fetch --quiet --filter=blob:none origin main "refs/tags/${release_tag}:refs/tags/${release_tag}"
if ! git -C "$tmp_repo" merge-base --is-ancestor "refs/tags/${release_tag}" "origin/main"; then
echo "::error::Tag ${release_tag} is not reachable from origin/main. Release tags must be cut from main."
exit 1
fi
# Guardrail: release tag and Cargo package version must stay aligned.
tag_version="${release_tag#v}"
cargo_version="$(git -C "$tmp_repo" show "refs/tags/${release_tag}:Cargo.toml" | sed -n 's/^version = "\([^"]*\)"/\1/p' | head -n1)"
if [[ -z "$cargo_version" ]]; then
echo "::error::Unable to read Cargo package version from ${release_tag}:Cargo.toml"
exit 1
fi
if [[ "$cargo_version" != "$tag_version" ]]; then
echo "::error::Tag ${release_tag} does not match Cargo.toml version (${cargo_version})."
echo "::error::Bump Cargo.toml version first, then create/publish the matching tag."
exit 1
fi
fi
{
echo "release_ref=${release_ref}"
echo "release_tag=${release_tag}"
echo "publish_release=${publish_release}"
echo "draft_release=${draft_release}"
} >> "$GITHUB_OUTPUT"
{
echo "### Release Context"
echo "- event: ${event_name}"
echo "- release_ref: ${release_ref}"
echo "- release_tag: ${release_tag}"
echo "- publish_release: ${publish_release}"
echo "- draft_release: ${draft_release}"
} >> "$GITHUB_STEP_SUMMARY"
build-release:
name: Build ${{ matrix.target }}
needs: [prepare]
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: ubuntu-latest
target: armv7-unknown-linux-gnueabihf
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: gcc-arm-linux-gnueabihf
linker_env: CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER
linker: arm-linux-gnueabihf-gcc
- os: ubuntu-latest
target: armv7-linux-androideabi
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
android_ndk: true
android_api: 21
- os: ubuntu-latest
target: aarch64-linux-android
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
android_ndk: true
android_api: 21
- os: macos-15-intel
target: x86_64-apple-darwin
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: macos-14
target: aarch64-apple-darwin
artifact: zeroclaw
archive_ext: tar.gz
cross_compiler: ""
linker_env: ""
linker: ""
- os: windows-latest
target: x86_64-pc-windows-msvc
artifact: zeroclaw.exe
archive_ext: zip
cross_compiler: ""
linker_env: ""
linker: ""
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
ref: ${{ needs.prepare.outputs.release_ref }}
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
if: runner.os != 'Windows'
- name: Install cross-compilation toolchain (Linux)
if: runner.os == 'Linux' && matrix.cross_compiler != ''
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Setup Android NDK
if: matrix.android_ndk
uses: nttld/setup-ndk@v1
id: setup-ndk
with:
ndk-version: r26d
add-to-path: true
- name: Configure Android toolchain
if: matrix.android_ndk
run: |
echo "Setting up Android NDK toolchain for ${{ matrix.target }}"
NDK_HOME="${{ steps.setup-ndk.outputs.ndk-path }}"
TOOLCHAIN="$NDK_HOME/toolchains/llvm/prebuilt/linux-x86_64/bin"
# Add to path for linker resolution
echo "$TOOLCHAIN" >> "$GITHUB_PATH"
# Set linker environment variables
if [[ "${{ matrix.target }}" == "armv7-linux-androideabi" ]]; then
echo "CARGO_TARGET_ARMV7_LINUX_ANDROIDEABI_LINKER=${TOOLCHAIN}/armv7a-linux-androideabi${{ matrix.android_api }}-clang" >> "$GITHUB_ENV"
elif [[ "${{ matrix.target }}" == "aarch64-linux-android" ]]; then
echo "CARGO_TARGET_AARCH64_LINUX_ANDROID_LINKER=${TOOLCHAIN}/aarch64-linux-android${{ matrix.android_api }}-clang" >> "$GITHUB_ENV"
fi
- name: Build release
shell: bash
env:
LINKER_ENV: ${{ matrix.linker_env }}
LINKER: ${{ matrix.linker }}
run: |
if [ -n "$LINKER_ENV" ] && [ -n "$LINKER" ]; then
echo "Using linker override: $LINKER_ENV=$LINKER"
export "$LINKER_ENV=$LINKER"
fi
cargo build --profile release-fast --locked --target ${{ matrix.target }}
- name: Check binary size (Unix)
if: runner.os != 'Windows'
run: bash scripts/ci/check_binary_size.sh "target/${{ matrix.target }}/release-fast/${{ matrix.artifact }}" "${{ matrix.target }}"
- name: Package (Unix)
if: runner.os != 'Windows'
run: |
cd target/${{ matrix.target }}/release-fast
tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }} ${{ matrix.artifact }}
- name: Package (Windows)
if: runner.os == 'Windows'
run: |
cd target/${{ matrix.target }}/release-fast
7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }} ${{ matrix.artifact }}
- name: Upload artifact
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with:
name: zeroclaw-${{ matrix.target }}
path: zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }}
retention-days: 7
verify-artifacts:
name: Verify Artifact Set
needs: [prepare, build-release]
runs-on: blacksmith-2vcpu-ubuntu-2404
steps:
- name: Download all artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
path: artifacts
- name: Validate expected archives
shell: bash
run: |
set -euo pipefail
expected=(
"zeroclaw-x86_64-unknown-linux-gnu.tar.gz"
"zeroclaw-aarch64-unknown-linux-gnu.tar.gz"
"zeroclaw-armv7-unknown-linux-gnueabihf.tar.gz"
"zeroclaw-armv7-linux-androideabi.tar.gz"
"zeroclaw-aarch64-linux-android.tar.gz"
"zeroclaw-x86_64-apple-darwin.tar.gz"
"zeroclaw-aarch64-apple-darwin.tar.gz"
"zeroclaw-x86_64-pc-windows-msvc.zip"
)
missing=0
for file in "${expected[@]}"; do
if ! find artifacts -type f -name "$file" -print -quit | grep -q .; then
echo "::error::Missing release archive: $file"
missing=1
fi
done
if [ "$missing" -ne 0 ]; then
exit 1
fi
echo "All expected release archives are present."
publish:
name: Publish Release
if: needs.prepare.outputs.publish_release == 'true'
needs: [prepare, verify-artifacts]
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 45
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
ref: ${{ needs.prepare.outputs.release_ref }}
- name: Download all artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
with:
path: artifacts
- name: Install syft
run: |
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
- name: Generate SBOM (CycloneDX)
run: |
syft dir:. --source-name zeroclaw -o cyclonedx-json=artifacts/zeroclaw.cdx.json -o spdx-json=artifacts/zeroclaw.spdx.json
{
echo "### SBOM Generated"
echo "- CycloneDX: zeroclaw.cdx.json"
echo "- SPDX: zeroclaw.spdx.json"
} >> "$GITHUB_STEP_SUMMARY"
- name: Attach license and notice files
run: |
cp LICENSE-APACHE artifacts/LICENSE-APACHE
cp LICENSE-MIT artifacts/LICENSE-MIT
cp NOTICE artifacts/NOTICE
- name: Generate SHA256 checksums
run: |
cd artifacts
find . -type f \( -name '*.tar.gz' -o -name '*.zip' -o -name '*.cdx.json' -o -name '*.spdx.json' -o -name 'LICENSE-APACHE' -o -name 'LICENSE-MIT' -o -name 'NOTICE' \) -exec sha256sum {} + | sed 's| \./[^/]*/| |' > SHA256SUMS
echo "Generated checksums:"
cat SHA256SUMS
- name: Install cosign
uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0
- name: Sign artifacts with cosign (keyless)
shell: bash
run: |
set -euo pipefail
while IFS= read -r -d '' file; do
cosign sign-blob --yes \
--bundle="${file}.sigstore.json" \
--output-signature="${file}.sig" \
--output-certificate="${file}.pem" \
"$file"
done < <(find artifacts -type f ! -name '*.sig' ! -name '*.pem' ! -name '*.sigstore.json' -print0)
- name: Verify GHCR release tag availability
shell: bash
env:
RELEASE_TAG: ${{ needs.prepare.outputs.release_tag }}
run: |
set -euo pipefail
repo="${GITHUB_REPOSITORY,,}"
manifest_url="https://ghcr.io/v2/${repo}/manifests/${RELEASE_TAG}"
accept_header="application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json"
max_attempts=75
sleep_seconds=20
for attempt in $(seq 1 "$max_attempts"); do
token_resp="$(curl -sS "https://ghcr.io/token?scope=repository:${repo}:pull" || true)"
token="$(echo "$token_resp" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')"
if [ -z "$token" ]; then
code="000"
else
code="$(curl -sS -o /tmp/ghcr-release-manifest.json -w "%{http_code}" \
-H "Authorization: Bearer ${token}" \
-H "Accept: ${accept_header}" \
"${manifest_url}" || true)"
fi
if [ "$code" = "200" ]; then
echo "GHCR release tag is available: ${repo}:${RELEASE_TAG}"
exit 0
fi
if [ "$attempt" -lt "$max_attempts" ]; then
echo "Waiting for GHCR tag ${repo}:${RELEASE_TAG} (attempt ${attempt}/${max_attempts}, HTTP ${code})..."
sleep "$sleep_seconds"
fi
done
echo "::error::GHCR tag ${repo}:${RELEASE_TAG} was not available before release publish timeout."
cat /tmp/ghcr-release-manifest.json || true
exit 1
- name: Create GitHub Release
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2
with:
tag_name: ${{ needs.prepare.outputs.release_tag }}
draft: ${{ needs.prepare.outputs.draft_release == 'true' }}
generate_release_notes: true
files: |
artifacts/**/*
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
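Because this workflow signs every artifact keylessly via OIDC, a consumer can verify a download end to end. A hedged sketch using cosign v2, assuming the `.sigstore.json` bundle published next to each archive; the certificate identity regexp is an assumption about the signing workflow ref, not something pinned in this file:

```bash
# Verify a release archive against its Sigstore bundle (cosign v2 syntax).
asset="zeroclaw-x86_64-unknown-linux-gnu.tar.gz"
cosign verify-blob \
  --bundle "${asset}.sigstore.json" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  --certificate-identity-regexp 'https://github\.com/zeroclaw-labs/zeroclaw/\.github/workflows/.*' \
  "$asset"
```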
@@ -0,0 +1,290 @@
name: Release Beta
on:
push:
branches: [master]
concurrency:
group: release
cancel-in-progress: false
permissions:
contents: write
packages: write
env:
CARGO_TERM_COLOR: always
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
version:
name: Resolve Version
runs-on: ubuntu-latest
outputs:
version: ${{ steps.ver.outputs.version }}
tag: ${{ steps.ver.outputs.tag }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Compute beta version
id: ver
shell: bash
run: |
set -euo pipefail
base_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
beta_tag="v${base_version}-beta.${GITHUB_RUN_NUMBER}"
echo "version=${base_version}" >> "$GITHUB_OUTPUT"
echo "tag=${beta_tag}" >> "$GITHUB_OUTPUT"
echo "Beta release: ${beta_tag}"
release-notes:
name: Generate Release Notes
runs-on: ubuntu-latest
outputs:
notes: ${{ steps.notes.outputs.body }}
features: ${{ steps.notes.outputs.features }}
contributors: ${{ steps.notes.outputs.contributors }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Build release notes
id: notes
shell: bash
run: |
set -euo pipefail
# Use a wider range — find the previous stable tag to capture all
# contributors across the full release cycle, not just one beta bump
PREV_TAG=$(git tag --sort=-creatordate \
| grep -vE '\-beta\.' \
| head -1 || echo "")
if [ -z "$PREV_TAG" ]; then
RANGE="HEAD"
else
RANGE="${PREV_TAG}..HEAD"
fi
# Extract features only (feat commits) — skip bug fixes for clean notes
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf \
| while IFS= read -r line; do echo "- ${line}"; done || true)
if [ -z "$FEATURES" ]; then
FEATURES="- Incremental improvements and polish"
fi
# Collect ALL unique contributors: git authors + Co-Authored-By
GIT_AUTHORS=$(git log "$RANGE" --pretty=format:"%an" --no-merges | sort -uf || true)
CO_AUTHORS=$(git log "$RANGE" --pretty=format:"%b" --no-merges \
| grep -ioE 'Co-Authored-By: *[^<]+' \
| sed 's/Co-Authored-By: *//i' \
| sed 's/ *$//' \
| sort -uf || true)
# Merge, deduplicate, and filter out bots
ALL_CONTRIBUTORS=$(printf "%s\n%s" "$GIT_AUTHORS" "$CO_AUTHORS" \
| sort -uf \
| grep -v '^$' \
| grep -viE '\[bot\]$|^dependabot|^github-actions|^copilot|^ZeroClaw Bot|^ZeroClaw Runner|^ZeroClaw Agent|^blacksmith' \
| while IFS= read -r name; do echo "- ${name}"; done || true)
# Build release body
BODY=$(cat <<NOTES_EOF
## What's New
${FEATURES}
## Contributors
${ALL_CONTRIBUTORS}
---
*Full changelog: ${PREV_TAG}...HEAD*
NOTES_EOF
)
# Output multiline values
{
echo "body<<BODY_EOF"
echo "$BODY"
echo "BODY_EOF"
} >> "$GITHUB_OUTPUT"
{
echo "features<<FEAT_EOF"
echo "$FEATURES"
echo "FEAT_EOF"
} >> "$GITHUB_OUTPUT"
{
echo "contributors<<CONTRIB_EOF"
echo "$ALL_CONTRIBUTORS"
echo "CONTRIB_EOF"
} >> "$GITHUB_OUTPUT"
web:
name: Build Web Dashboard
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- uses: actions/upload-artifact@v4
with:
name: web-dist
path: web/dist/
retention-days: 1
build:
name: Build ${{ matrix.target }}
needs: [version, web]
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
artifact: zeroclaw
ext: tar.gz
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
artifact: zeroclaw
ext: tar.gz
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: macos-14
target: aarch64-apple-darwin
artifact: zeroclaw
ext: tar.gz
- os: windows-latest
target: x86_64-pc-windows-msvc
artifact: zeroclaw.exe
ext: zip
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
- uses: actions/download-artifact@v4
with:
name: web-dist
path: web/dist/
- name: Install cross compiler
if: matrix.cross_compiler
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Build release
shell: bash
run: |
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --target ${{ matrix.target }}
- name: Package (Unix)
if: runner.os != 'Windows'
run: |
cd target/${{ matrix.target }}/release
tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}
- name: Package (Windows)
if: runner.os == 'Windows'
run: |
cd target/${{ matrix.target }}/release
7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: zeroclaw-${{ matrix.target }}
path: zeroclaw-${{ matrix.target }}.${{ matrix.ext }}
retention-days: 7
publish:
name: Publish Beta Release
needs: [version, release-notes, build]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
pattern: zeroclaw-*
path: artifacts
- name: Generate checksums
run: |
cd artifacts
find . -type f \( -name '*.tar.gz' -o -name '*.zip' \) -exec sha256sum {} + | sed 's| \./[^/]*/| |' > SHA256SUMS
cat SHA256SUMS
- name: Create GitHub Release
uses: softprops/action-gh-release@5be0e66d93ac7ed76da52eca8bb058f665c3a5fe # v2.4.2
with:
tag_name: ${{ needs.version.outputs.tag }}
name: ${{ needs.version.outputs.tag }}
prerelease: true
body: ${{ needs.release-notes.outputs.notes }}
files: |
artifacts/**/*
install.sh
env:
GITHUB_TOKEN: ${{ secrets.RELEASE_TOKEN }}
- name: Trigger website redeploy
env:
PAT: ${{ secrets.WEBSITE_REPO_PAT }}
run: |
curl -fsSL -X POST \
-H "Authorization: token $PAT" \
-H "Accept: application/vnd.github+json" \
https://api.github.com/repos/zeroclaw-labs/zeroclaw-website/dispatches \
-d '{"event_type":"new-release","client_payload":{"install_script_url":"https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/install.sh"}}'
docker:
name: Push Docker Image
needs: [version, build]
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
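For beta consumers, the SHA256SUMS asset published above makes download verification a short check. A minimal sketch; the tag and asset name below are illustrative:

```bash
tag="v0.2.0-beta.123"                               # illustrative beta tag
asset="zeroclaw-x86_64-unknown-linux-gnu.tar.gz"
base="https://github.com/zeroclaw-labs/zeroclaw/releases/download/${tag}"

curl -fsSLO "${base}/${asset}"
curl -fsSLO "${base}/SHA256SUMS"

# Compare the recorded digest with a locally computed one.
want=$(grep "${asset}" SHA256SUMS | awk '{print $1}')
got=$(sha256sum "${asset}" | awk '{print $1}')
[ "$want" = "$got" ] && echo "checksum OK" || { echo "checksum mismatch"; exit 1; }
```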
@@ -0,0 +1,291 @@
name: Release Stable
on:
workflow_dispatch:
inputs:
version:
description: "Stable version to release (e.g. 0.2.0)"
required: true
type: string
concurrency:
group: promote-release
cancel-in-progress: false
permissions:
contents: write
packages: write
env:
CARGO_TERM_COLOR: always
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
validate:
name: Validate Version
runs-on: ubuntu-latest
outputs:
tag: ${{ steps.check.outputs.tag }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Validate semver and Cargo.toml match
id: check
shell: bash
run: |
set -euo pipefail
input_version="${{ inputs.version }}"
cargo_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
if [[ ! "$input_version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "::error::Version must be semver (X.Y.Z). Got: ${input_version}"
exit 1
fi
if [[ "$cargo_version" != "$input_version" ]]; then
echo "::error::Cargo.toml version (${cargo_version}) does not match input (${input_version}). Bump Cargo.toml first."
exit 1
fi
tag="v${input_version}"
if git ls-remote --exit-code --tags origin "refs/tags/${tag}" >/dev/null 2>&1; then
echo "::error::Tag ${tag} already exists."
exit 1
fi
echo "tag=${tag}" >> "$GITHUB_OUTPUT"
web:
name: Build Web Dashboard
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 22
cache: npm
cache-dependency-path: web/package-lock.json
- name: Build web dashboard
run: cd web && npm ci && npm run build
- uses: actions/upload-artifact@v4
with:
name: web-dist
path: web/dist/
retention-days: 1
release-notes:
name: Generate Release Notes
runs-on: ubuntu-latest
outputs:
notes: ${{ steps.notes.outputs.body }}
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Build release notes
id: notes
shell: bash
env:
INPUT_VERSION: ${{ inputs.version }}
run: |
set -euo pipefail
# Find the previous stable tag (exclude beta tags)
PREV_TAG=$(git tag --sort=-creatordate | grep -vE '\-beta\.' | grep -v "^v${INPUT_VERSION}$" | head -1 || echo "")
if [ -z "$PREV_TAG" ]; then
RANGE="HEAD"
else
RANGE="${PREV_TAG}..HEAD"
fi
# Extract features only — skip bug fixes for clean release notes
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf \
| while IFS= read -r line; do echo "- ${line}"; done || true)
if [ -z "$FEATURES" ]; then
FEATURES="- Incremental improvements and polish"
fi
# Collect ALL unique contributors: git authors + Co-Authored-By
GIT_AUTHORS=$(git log "$RANGE" --pretty=format:"%an" --no-merges | sort -uf || true)
CO_AUTHORS=$(git log "$RANGE" --pretty=format:"%b" --no-merges \
| grep -ioE 'Co-Authored-By: *[^<]+' \
| sed 's/Co-Authored-By: *//i' \
| sed 's/ *$//' \
| sort -uf || true)
# Merge, deduplicate, and filter out bots
ALL_CONTRIBUTORS=$(printf "%s\n%s" "$GIT_AUTHORS" "$CO_AUTHORS" \
| sort -uf \
| grep -v '^$' \
| grep -viE '\[bot\]$|^dependabot|^github-actions|^copilot|^ZeroClaw Bot|^ZeroClaw Runner|^ZeroClaw Agent|^blacksmith' \
| while IFS= read -r name; do echo "- ${name}"; done || true)
BODY=$(cat <<NOTES_EOF
## What's New
${FEATURES}
## Contributors
${ALL_CONTRIBUTORS}
---
*Full changelog: ${PREV_TAG}...v${INPUT_VERSION}*
NOTES_EOF
)
{
echo "body<<BODY_EOF"
echo "$BODY"
echo "BODY_EOF"
} >> "$GITHUB_OUTPUT"
build:
name: Build ${{ matrix.target }}
needs: [validate, web]
runs-on: ${{ matrix.os }}
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
artifact: zeroclaw
ext: tar.gz
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
artifact: zeroclaw
ext: tar.gz
cross_compiler: gcc-aarch64-linux-gnu
linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
linker: aarch64-linux-gnu-gcc
- os: macos-14
target: aarch64-apple-darwin
artifact: zeroclaw
ext: tar.gz
- os: windows-latest
target: x86_64-pc-windows-msvc
artifact: zeroclaw.exe
ext: zip
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
if: runner.os != 'Windows'
- uses: actions/download-artifact@v4
with:
name: web-dist
path: web/dist/
- name: Install cross compiler
if: matrix.cross_compiler
run: |
sudo apt-get update -qq
sudo apt-get install -y ${{ matrix.cross_compiler }}
- name: Build release
shell: bash
run: |
if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
export "${{ matrix.linker_env }}=${{ matrix.linker }}"
fi
cargo build --release --locked --target ${{ matrix.target }}
- name: Package (Unix)
if: runner.os != 'Windows'
run: |
cd target/${{ matrix.target }}/release
tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}
- name: Package (Windows)
if: runner.os == 'Windows'
run: |
cd target/${{ matrix.target }}/release
7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: zeroclaw-${{ matrix.target }}
path: zeroclaw-${{ matrix.target }}.${{ matrix.ext }}
retention-days: 14
publish:
name: Publish Stable Release
needs: [validate, release-notes, build]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
pattern: zeroclaw-*
path: artifacts
- name: Generate checksums
run: |
cd artifacts
find . -type f \( -name '*.tar.gz' -o -name '*.zip' \) -exec sha256sum {} + | sed 's| \./[^/]*/| |' > SHA256SUMS
cat SHA256SUMS
- name: Create GitHub Release
uses: softprops/action-gh-release@5be0e66d93ac7ed76da52eca8bb058f665c3a5fe # v2.4.2
with:
tag_name: ${{ needs.validate.outputs.tag }}
name: ${{ needs.validate.outputs.tag }}
prerelease: false
body: ${{ needs.release-notes.outputs.notes }}
files: |
artifacts/**/*
install.sh
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Trigger website redeploy
env:
PAT: ${{ secrets.WEBSITE_REPO_PAT }}
run: |
curl -fsSL -X POST \
-H "Authorization: token $PAT" \
-H "Accept: application/vnd.github+json" \
https://api.github.com/repos/zeroclaw-labs/zeroclaw-website/dispatches \
-d '{"event_type":"new-release","client_payload":{"install_script_url":"https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/install.sh"}}'
docker:
name: Push Docker Image
needs: [validate, build]
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
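The docker job pushes a multi-arch image under both the release tag and `latest`. A quick way to confirm both platforms landed, assuming the image name resolves to the lowercased `${{ github.repository }}` as configured above:

```bash
# Lists the per-platform manifests (linux/amd64 and linux/arm64 expected).
docker buildx imagetools inspect ghcr.io/zeroclaw-labs/zeroclaw:latest
```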
@@ -1,54 +0,0 @@
// Enforce ownership rules for root license files in PRs.
module.exports = async ({ github, context, core }) => {
const owner = context.repo.owner;
const repo = context.repo.repo;
const prNumber = context.payload.pull_request?.number;
const prAuthor = context.payload.pull_request?.user?.login?.toLowerCase() || "";
if (!prNumber) {
core.setFailed("Missing pull_request context.");
return;
}
const ownerAllowlist = ["willsarg"];
if (ownerAllowlist.length === 0) {
core.setFailed("License owner allowlist is empty.");
return;
}
const protectedFiles = new Set(["LICENSE-APACHE", "LICENSE-MIT"]);
const files = await github.paginate(github.rest.pulls.listFiles, {
owner,
repo,
pull_number: prNumber,
per_page: 100,
});
const changedProtectedFiles = files
.map((file) => file.filename)
.filter((name) => protectedFiles.has(name));
if (changedProtectedFiles.length === 0) {
core.info("No protected root license files changed in this PR.");
return;
}
core.info(`Protected license files changed:\n- ${changedProtectedFiles.join("\n- ")}`);
core.info(`Allowed license file editors: ${ownerAllowlist.join(", ")}`);
if (!prAuthor) {
core.setFailed("Unable to resolve PR author login.");
return;
}
if (!ownerAllowlist.includes(prAuthor)) {
core.setFailed(
`Root license files (${changedProtectedFiles.join(", ")}) can only be changed by ${ownerAllowlist.join(", ")}. PR author is @${prAuthor}.`,
);
return;
}
core.info(`License file edit authorized for PR author: @${prAuthor}`);
};
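To reproduce this guard's inputs before pushing, the gh CLI can surface the same changed-file list and author the script sees. A rough local equivalent; the PR number is hypothetical:

```bash
pr=1234   # hypothetical PR number
gh pr view "$pr" --repo zeroclaw-labs/zeroclaw --json files,author \
  --jq '{author: .author.login,
         protected: [.files[].path
                     | select(. == "LICENSE-APACHE" or . == "LICENSE-MIT")]}'
```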
@@ -1,83 +0,0 @@
// Extracted from ci-run.yml step: Require owner approval for workflow file changes
module.exports = async ({ github, context, core }) => {
const owner = context.repo.owner;
const repo = context.repo.repo;
const prNumber = context.payload.pull_request?.number;
const prAuthor = context.payload.pull_request?.user?.login?.toLowerCase() || "";
if (!prNumber) {
core.setFailed("Missing pull_request context.");
return;
}
const baseOwners = ["theonlyhennygod", "willsarg", "chumyin"];
const configuredOwners = (process.env.WORKFLOW_OWNER_LOGINS || "")
.split(",")
.map((login) => login.trim().toLowerCase())
.filter(Boolean);
const ownerAllowlist = [...new Set([...baseOwners, ...configuredOwners])];
if (ownerAllowlist.length === 0) {
core.setFailed("Workflow owner allowlist is empty.");
return;
}
core.info(`Workflow owner allowlist: ${ownerAllowlist.join(", ")}`);
const files = await github.paginate(github.rest.pulls.listFiles, {
owner,
repo,
pull_number: prNumber,
per_page: 100,
});
const workflowFiles = files
.map((file) => file.filename)
.filter((name) => name.startsWith(".github/workflows/"));
if (workflowFiles.length === 0) {
core.info("No workflow files changed in this PR.");
return;
}
core.info(`Workflow files changed:\n- ${workflowFiles.join("\n- ")}`);
if (prAuthor && ownerAllowlist.includes(prAuthor)) {
core.info(`Workflow PR authored by allowlisted owner: @${prAuthor}`);
return;
}
const reviews = await github.paginate(github.rest.pulls.listReviews, {
owner,
repo,
pull_number: prNumber,
per_page: 100,
});
const latestReviewByUser = new Map();
for (const review of reviews) {
const login = review.user?.login;
if (!login) continue;
latestReviewByUser.set(login.toLowerCase(), review.state);
}
const approvedUsers = [...latestReviewByUser.entries()]
.filter(([, state]) => state === "APPROVED")
.map(([login]) => login);
if (approvedUsers.length === 0) {
core.setFailed("Workflow files changed but no approving review is present.");
return;
}
const ownerApprover = approvedUsers.find((login) => ownerAllowlist.includes(login));
if (!ownerApprover) {
core.setFailed(
`Workflow files changed. Approvals found (${approvedUsers.join(", ")}), but none match workflow owner allowlist.`,
);
return;
}
core.info(`Workflow owner approval present: @${ownerApprover}`);
};
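The approval logic keeps only each reviewer's latest review state, which is easy to sanity-check from the command line. A simplified sketch (it lists all approvers rather than deduplicating by latest review per user, so treat it as a first pass; the PR number is hypothetical):

```bash
pr=1234   # hypothetical PR number
gh pr view "$pr" --repo zeroclaw-labs/zeroclaw --json reviews \
  --jq '[.reviews[] | select(.state == "APPROVED") | .author.login] | unique'
```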
@@ -1,90 +0,0 @@
// Post actionable lint failure summary as a PR comment.
// Used by the lint-feedback CI job via actions/github-script.
//
// Required environment variables:
// RUST_CHANGED — "true" if Rust files changed
// DOCS_CHANGED — "true" if docs files changed
// LINT_RESULT — result of the lint job
// LINT_DELTA_RESULT — result of the strict delta lint job
// DOCS_RESULT — result of the docs-quality job
module.exports = async ({ github, context, core }) => {
const owner = context.repo.owner;
const repo = context.repo.repo;
const issueNumber = context.payload.pull_request?.number;
if (!issueNumber) return;
const marker = "<!-- ci-lint-feedback -->";
const rustChanged = process.env.RUST_CHANGED === "true";
const docsChanged = process.env.DOCS_CHANGED === "true";
const lintResult = process.env.LINT_RESULT || "skipped";
const lintDeltaResult = process.env.LINT_DELTA_RESULT || "skipped";
const docsResult = process.env.DOCS_RESULT || "skipped";
const failures = [];
if (rustChanged && !["success", "skipped"].includes(lintResult)) {
failures.push("`Lint Gate (Format + Clippy)` failed.");
}
if (rustChanged && !["success", "skipped"].includes(lintDeltaResult)) {
failures.push("`Lint Gate (Strict Delta)` failed.");
}
if (docsChanged && !["success", "skipped"].includes(docsResult)) {
failures.push("`Docs Quality` failed.");
}
const comments = await github.paginate(github.rest.issues.listComments, {
owner,
repo,
issue_number: issueNumber,
per_page: 100,
});
const existing = comments.find((comment) => (comment.body || "").includes(marker));
if (failures.length === 0) {
if (existing) {
await github.rest.issues.deleteComment({
owner,
repo,
comment_id: existing.id,
});
}
core.info("No lint/docs gate failures. No feedback comment required.");
return;
}
const runUrl = `${context.serverUrl}/${owner}/${repo}/actions/runs/${context.runId}`;
const body = [
marker,
"### CI lint feedback",
"",
"This PR failed one or more fast lint/documentation gates:",
"",
...failures.map((item) => `- ${item}`),
"",
"Open the failing logs in this run:",
`- ${runUrl}`,
"",
"Local fix commands:",
"- `./scripts/ci/rust_quality_gate.sh`",
"- `./scripts/ci/rust_strict_delta_gate.sh`",
"- `./scripts/ci/docs_quality_gate.sh`",
"",
"After fixes, push a new commit and CI will re-run automatically.",
].join("\n");
if (existing) {
await github.rest.issues.updateComment({
owner,
repo,
comment_id: existing.id,
body,
});
} else {
await github.rest.issues.createComment({
owner,
repo,
issue_number: issueNumber,
body,
});
}
};
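Because the comment is keyed on a hidden HTML marker, finding (or confirming deletion of) the sticky comment is straightforward via the REST API. A minimal sketch; the PR number is hypothetical:

```bash
pr=1234   # hypothetical PR number
gh api "repos/zeroclaw-labs/zeroclaw/issues/${pr}/comments" --paginate \
  --jq '.[] | select(.body | contains("<!-- ci-lint-feedback -->")) | .id'
```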
@@ -1,132 +0,0 @@
// Extracted from pr-auto-response.yml step: Apply contributor tier label for issue author
module.exports = async ({ github, context, core }) => {
const owner = context.repo.owner;
const repo = context.repo.repo;
const issue = context.payload.issue;
const pullRequest = context.payload.pull_request;
const target = issue ?? pullRequest;
async function loadContributorTierPolicy() {
const policyPath = process.env.LABEL_POLICY_PATH || ".github/label-policy.json";
const fallback = {
contributorTierColor: "2ED9FF",
contributorTierRules: [
{ label: "distinguished contributor", minMergedPRs: 50 },
{ label: "principal contributor", minMergedPRs: 20 },
{ label: "experienced contributor", minMergedPRs: 10 },
{ label: "trusted contributor", minMergedPRs: 5 },
],
};
try {
const { data } = await github.rest.repos.getContent({
owner,
repo,
path: policyPath,
ref: context.payload.repository?.default_branch || "main",
});
const json = JSON.parse(Buffer.from(data.content, "base64").toString("utf8"));
const contributorTierRules = (json.contributor_tiers || []).map((entry) => ({
label: String(entry.label || "").trim(),
minMergedPRs: Number(entry.min_merged_prs || 0),
}));
const contributorTierColor = String(json.contributor_tier_color || "").toUpperCase();
if (!contributorTierColor || contributorTierRules.length === 0) {
return fallback;
}
return { contributorTierColor, contributorTierRules };
} catch (error) {
core.warning(`failed to load ${policyPath}, using fallback policy: ${error.message}`);
return fallback;
}
}
const { contributorTierColor, contributorTierRules } = await loadContributorTierPolicy();
const contributorTierLabels = contributorTierRules.map((rule) => rule.label);
const managedContributorLabels = new Set(contributorTierLabels);
const action = context.payload.action;
const changedLabel = context.payload.label?.name;
if (!target) return;
if ((action === "labeled" || action === "unlabeled") && !managedContributorLabels.has(changedLabel)) {
return;
}
const author = target.user;
if (!author || author.type === "Bot") return;
function contributorTierDescription(rule) {
return `Contributor with ${rule.minMergedPRs}+ merged PRs.`;
}
async function ensureContributorTierLabels() {
for (const rule of contributorTierRules) {
const label = rule.label;
const expectedDescription = contributorTierDescription(rule);
try {
const { data: existing } = await github.rest.issues.getLabel({ owner, repo, name: label });
const currentColor = (existing.color || "").toUpperCase();
const currentDescription = (existing.description || "").trim();
if (currentColor !== contributorTierColor || currentDescription !== expectedDescription) {
await github.rest.issues.updateLabel({
owner,
repo,
name: label,
new_name: label,
color: contributorTierColor,
description: expectedDescription,
});
}
} catch (error) {
if (error.status !== 404) throw error;
await github.rest.issues.createLabel({
owner,
repo,
name: label,
color: contributorTierColor,
description: expectedDescription,
});
}
}
}
function selectContributorTier(mergedCount) {
const matchedTier = contributorTierRules.find((rule) => mergedCount >= rule.minMergedPRs);
return matchedTier ? matchedTier.label : null;
}
let contributorTierLabel = null;
try {
const { data: mergedSearch } = await github.rest.search.issuesAndPullRequests({
q: `repo:${owner}/${repo} is:pr is:merged author:${author.login}`,
per_page: 1,
});
const mergedCount = mergedSearch.total_count || 0;
contributorTierLabel = selectContributorTier(mergedCount);
} catch (error) {
core.warning(`failed to evaluate contributor tier status: ${error.message}`);
return;
}
await ensureContributorTierLabels();
const { data: currentLabels } = await github.rest.issues.listLabelsOnIssue({
owner,
repo,
issue_number: target.number,
});
const keepLabels = currentLabels
.map((label) => label.name)
.filter((label) => !contributorTierLabels.includes(label));
if (contributorTierLabel) {
keepLabels.push(contributorTierLabel);
}
await github.rest.issues.setLabels({
owner,
repo,
issue_number: target.number,
labels: [...new Set(keepLabels)],
});
};
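The tier decision hinges on a single number: the author's merged-PR count from the search API. The same query can be run directly to see which tier a given login would receive; the login below is hypothetical:

```bash
author=somelogin   # hypothetical login
gh api -X GET search/issues \
  -f q="repo:zeroclaw-labs/zeroclaw is:pr is:merged author:${author}" \
  -f per_page=1 --jq .total_count
```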
@@ -1,94 +0,0 @@
// Extracted from pr-auto-response.yml step: Handle label-driven responses
module.exports = async ({ github, context, core }) => {
const label = context.payload.label?.name;
if (!label) return;
const issue = context.payload.issue;
const pullRequest = context.payload.pull_request;
const target = issue ?? pullRequest;
if (!target) return;
const isIssue = Boolean(issue);
const issueNumber = target.number;
const owner = context.repo.owner;
const repo = context.repo.repo;
const rules = [
{
label: "r:support",
close: true,
closeIssuesOnly: true,
closeReason: "not_planned",
message:
"This looks like a usage/support request. Please use README + docs first, then open a focused bug with repro details if behavior is incorrect.",
},
{
label: "r:needs-repro",
close: false,
message:
"Thanks for the report. Please add deterministic repro steps, exact environment, and redacted logs so maintainers can triage quickly.",
},
{
label: "invalid",
close: true,
closeIssuesOnly: true,
closeReason: "not_planned",
message:
"Closing as invalid based on current information. If this is still relevant, open a new issue with updated evidence and reproducible steps.",
},
{
label: "duplicate",
close: true,
closeIssuesOnly: true,
closeReason: "not_planned",
message:
"Closing as duplicate. Please continue discussion in the canonical linked issue/PR.",
},
];
const rule = rules.find((entry) => entry.label === label);
if (!rule) return;
const marker = `<!-- auto-response:${rule.label} -->`;
const comments = await github.paginate(github.rest.issues.listComments, {
owner,
repo,
issue_number: issueNumber,
per_page: 100,
});
const alreadyCommented = comments.some((comment) =>
(comment.body || "").includes(marker)
);
if (!alreadyCommented) {
await github.rest.issues.createComment({
owner,
repo,
issue_number: issueNumber,
body: `${rule.message}\n\n${marker}`,
});
}
if (!rule.close) return;
if (rule.closeIssuesOnly && !isIssue) return;
if (target.state === "closed") return;
if (isIssue) {
await github.rest.issues.update({
owner,
repo,
issue_number: issueNumber,
state: "closed",
state_reason: rule.closeReason || "not_planned",
});
} else {
await github.rest.issues.update({
owner,
repo,
issue_number: issueNumber,
state: "closed",
});
}
};
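Since these rules fire on the `labeled` event, a maintainer can exercise any of them by applying the label from the CLI. For example (issue number hypothetical):

```bash
issue=1234   # hypothetical issue number
gh issue edit "$issue" --repo zeroclaw-labs/zeroclaw --add-label "r:needs-repro"
```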
@@ -1,161 +0,0 @@
// Extracted from pr-check-status.yml step: Nudge PRs that need rebase or CI refresh
module.exports = async ({ github, context, core }) => {
const staleHours = Number(process.env.STALE_HOURS || "48");
const ignoreLabels = new Set(["no-stale", "stale", "maintainer", "no-pr-hygiene"]);
const marker = "<!-- pr-hygiene-nudge -->";
const owner = context.repo.owner;
const repo = context.repo.repo;
const openPrs = await github.paginate(github.rest.pulls.list, {
owner,
repo,
state: "open",
per_page: 100,
});
const activePrs = openPrs.filter((pr) => {
if (pr.draft) {
return false;
}
const labels = new Set((pr.labels || []).map((label) => label.name));
return ![...ignoreLabels].some((label) => labels.has(label));
});
core.info(`Scanning ${activePrs.length} open PR(s) for hygiene nudges.`);
let nudged = 0;
let skipped = 0;
for (const pr of activePrs) {
const { data: headCommit } = await github.rest.repos.getCommit({
owner,
repo,
ref: pr.head.sha,
});
const headCommitAt =
headCommit.commit?.committer?.date || headCommit.commit?.author?.date;
if (!headCommitAt) {
skipped += 1;
core.info(`#${pr.number}: missing head commit timestamp, skipping.`);
continue;
}
const ageHours = (Date.now() - new Date(headCommitAt).getTime()) / 3600000;
if (ageHours < staleHours) {
skipped += 1;
continue;
}
const { data: prDetail } = await github.rest.pulls.get({
owner,
repo,
pull_number: pr.number,
});
const isBehindBase = prDetail.mergeable_state === "behind";
const { data: checkRunsData } = await github.rest.checks.listForRef({
owner,
repo,
ref: pr.head.sha,
per_page: 100,
});
const ciGateRuns = (checkRunsData.check_runs || [])
.filter((run) => run.name === "CI Required Gate")
.sort((a, b) => {
const aTime = new Date(a.started_at || a.completed_at || a.created_at).getTime();
const bTime = new Date(b.started_at || b.completed_at || b.created_at).getTime();
return bTime - aTime;
});
let ciState = "missing";
if (ciGateRuns.length > 0) {
const latest = ciGateRuns[0];
if (latest.status !== "completed") {
ciState = "in_progress";
} else if (["success", "neutral", "skipped"].includes(latest.conclusion || "")) {
ciState = "success";
} else {
ciState = String(latest.conclusion || "failure");
}
}
const ciMissing = ciState === "missing";
const ciFailing = !["success", "in_progress", "missing"].includes(ciState);
if (!isBehindBase && !ciMissing && !ciFailing) {
skipped += 1;
continue;
}
const reasons = [];
if (isBehindBase) {
reasons.push("- Branch is behind `main` (please rebase or merge the latest base branch).");
}
if (ciMissing) {
reasons.push("- No `CI Required Gate` run was found for the current head commit.");
}
if (ciFailing) {
reasons.push(`- Latest \`CI Required Gate\` result is \`${ciState}\`.`);
}
const shortSha = pr.head.sha.slice(0, 12);
const body = [
marker,
`Hi @${pr.user.login}, friendly automation nudge from PR hygiene.`,
"",
`This PR has had no new commits for **${Math.floor(ageHours)}h** and still needs an update before merge:`,
"",
...reasons,
"",
"### Recommended next steps",
"1. Rebase your branch on `main`.",
"2. Push the updated branch and re-run checks (or use **Re-run failed jobs**).",
"3. Post fresh validation output in this PR thread.",
"",
"Maintainers: apply `no-stale` to opt out for accepted-but-blocked work.",
`Head SHA: \`${shortSha}\``,
].join("\n");
const { data: comments } = await github.rest.issues.listComments({
owner,
repo,
issue_number: pr.number,
per_page: 100,
});
const existing = comments.find(
(comment) => comment.user?.type === "Bot" && comment.body?.includes(marker),
);
if (existing) {
if (existing.body === body) {
skipped += 1;
continue;
}
await github.rest.issues.updateComment({
owner,
repo,
comment_id: existing.id,
body,
});
} else {
await github.rest.issues.createComment({
owner,
repo,
issue_number: pr.number,
body,
});
}
nudged += 1;
core.info(`#${pr.number}: hygiene nudge posted/updated.`);
}
core.info(`Done. Nudged=${nudged}, skipped=${skipped}`);
};
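Both signals the nudge keys on (`mergeable_state` and the latest `CI Required Gate` run) are queryable directly, which helps when debugging why a PR was or was not nudged. A rough sketch; the PR number is hypothetical:

```bash
pr=1234   # hypothetical PR number
gh api "repos/zeroclaw-labs/zeroclaw/pulls/${pr}" --jq .mergeable_state

sha=$(gh pr view "$pr" --repo zeroclaw-labs/zeroclaw \
  --json headRefOid --jq .headRefOid)
gh api "repos/zeroclaw-labs/zeroclaw/commits/${sha}/check-runs" \
  --jq '.check_runs[] | select(.name == "CI Required Gate")
        | {status, conclusion, started_at}'
```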
@@ -1,204 +0,0 @@
// Run safe intake checks for PR events and maintain a single sticky comment.
// Used by .github/workflows/pr-intake-checks.yml via actions/github-script.
module.exports = async ({ github, context, core }) => {
const owner = context.repo.owner;
const repo = context.repo.repo;
const pr = context.payload.pull_request;
if (!pr) return;
const prAuthor = (pr.user?.login || "").toLowerCase();
const prBaseRef = pr.base?.ref || "";
const marker = "<!-- pr-intake-checks -->";
const legacyMarker = "<!-- pr-intake-sanity -->";
const requiredSections = [
"## Summary",
"## Validation Evidence (required)",
"## Security Impact (required)",
"## Privacy and Data Hygiene (required)",
"## Rollback Plan (required)",
];
const body = pr.body || "";
const missingSections = requiredSections.filter((section) => !body.includes(section));
const missingFields = [];
const requiredFieldChecks = [
["summary problem", /- Problem:\s*\S+/m],
["summary why it matters", /- Why it matters:\s*\S+/m],
["summary what changed", /- What changed:\s*\S+/m],
["validation commands", /Commands and result summary:\s*[\s\S]*```/m],
["security risk/mitigation", /- New permissions\/capabilities\?\s*\(`Yes\/No`\):\s*\S+/m],
["privacy status", /- Data-hygiene status\s*\(`pass\|needs-follow-up`\):\s*\S+/m],
["rollback plan", /- Fast rollback command\/path:\s*\S+/m],
];
for (const [name, pattern] of requiredFieldChecks) {
if (!pattern.test(body)) {
missingFields.push(name);
}
}
const files = await github.paginate(github.rest.pulls.listFiles, {
owner,
repo,
pull_number: pr.number,
per_page: 100,
});
const formatWarnings = [];
const dangerousProblems = [];
for (const file of files) {
const patch = file.patch || "";
if (!patch) continue;
const lines = patch.split("\n");
for (let idx = 0; idx < lines.length; idx += 1) {
const line = lines[idx];
if (!line.startsWith("+") || line.startsWith("+++")) continue;
const added = line.slice(1);
const lineNo = idx + 1;
if (/\t/.test(added)) {
formatWarnings.push(`${file.filename}:patch#${lineNo} contains tab characters`);
}
if (/[ \t]+$/.test(added)) {
formatWarnings.push(`${file.filename}:patch#${lineNo} contains trailing whitespace`);
}
if (/^(<<<<<<<|=======|>>>>>>>)/.test(added)) {
dangerousProblems.push(`${file.filename}:patch#${lineNo} contains merge conflict markers`);
}
}
}
const workflowFilesChanged = files
.map((file) => file.filename)
.filter((name) => name.startsWith(".github/workflows/"));
const advisoryFindings = [];
const blockingFindings = [];
if (missingSections.length > 0) {
advisoryFindings.push(`Missing required PR template sections: ${missingSections.join(", ")}`);
}
if (missingFields.length > 0) {
advisoryFindings.push(`Incomplete required PR template fields: ${missingFields.join(", ")}`);
}
if (formatWarnings.length > 0) {
advisoryFindings.push(`Formatting issues in added lines (${formatWarnings.length})`);
}
if (dangerousProblems.length > 0) {
blockingFindings.push(`Dangerous patch markers found (${dangerousProblems.length})`);
}
const promotionAuthorAllowlist = new Set(["willsarg", "theonlyhennygod"]);
const shouldRetargetToDev =
prBaseRef === "main" && !promotionAuthorAllowlist.has(prAuthor);
if (shouldRetargetToDev) {
advisoryFindings.push(
"This PR targets `main`, but normal contributions must target `dev`. Retarget this PR to `dev` unless this is an authorized promotion PR.",
);
}
const comments = await github.paginate(github.rest.issues.listComments, {
owner,
repo,
issue_number: pr.number,
per_page: 100,
});
const existing = comments.find((comment) => {
const body = comment.body || "";
return body.includes(marker) || body.includes(legacyMarker);
});
if (advisoryFindings.length === 0 && blockingFindings.length === 0) {
if (existing) {
await github.rest.issues.deleteComment({
owner,
repo,
comment_id: existing.id,
});
}
core.info("PR intake sanity checks passed.");
return;
}
const runUrl = `${context.serverUrl}/${owner}/${repo}/actions/runs/${context.runId}`;
const advisoryDetails = [];
if (formatWarnings.length > 0) {
advisoryDetails.push(...formatWarnings.slice(0, 20).map((entry) => `- ${entry}`));
if (formatWarnings.length > 20) {
advisoryDetails.push(`- ...and ${formatWarnings.length - 20} more issue(s)`);
}
}
const blockingDetails = [];
if (dangerousProblems.length > 0) {
blockingDetails.push(...dangerousProblems.slice(0, 20).map((entry) => `- ${entry}`));
if (dangerousProblems.length > 20) {
blockingDetails.push(`- ...and ${dangerousProblems.length - 20} more issue(s)`);
}
}
const isBlocking = blockingFindings.length > 0;
const ownerApprovalNote = workflowFilesChanged.length > 0
? [
"",
"Workflow files changed in this PR:",
...workflowFilesChanged.map((name) => `- \`${name}\``),
"",
"Reminder: workflow changes require owner approval via `CI Required Gate`.",
].join("\n")
: "";
const commentBody = [
marker,
isBlocking
? "### PR intake checks failed (blocking)"
: "### PR intake checks found warnings (non-blocking)",
"",
isBlocking
? "Fast safe checks found blocking safety issues:"
: "Fast safe checks found advisory issues. CI lint/test/build gates still enforce merge quality.",
...(blockingFindings.length > 0 ? blockingFindings.map((entry) => `- ${entry}`) : []),
...(advisoryFindings.length > 0 ? advisoryFindings.map((entry) => `- ${entry}`) : []),
"",
"Action items:",
"1. Complete required PR template sections/fields.",
"2. Remove tabs, trailing whitespace, and merge conflict markers from added lines.",
"3. Re-run local checks before pushing:",
" - `./scripts/ci/rust_quality_gate.sh`",
" - `./scripts/ci/rust_strict_delta_gate.sh`",
" - `./scripts/ci/docs_quality_gate.sh`",
...(shouldRetargetToDev
? ["4. Retarget this PR base branch from `main` to `dev`."]
: []),
"",
`Run logs: ${runUrl}`,
"",
"Detected blocking line issues (sample):",
...(blockingDetails.length > 0 ? blockingDetails : ["- none"]),
"",
"Detected advisory line issues (sample):",
...(advisoryDetails.length > 0 ? advisoryDetails : ["- none"]),
ownerApprovalNote,
].join("\n");
if (existing) {
await github.rest.issues.updateComment({
owner,
repo,
comment_id: existing.id,
body: commentBody,
});
} else {
await github.rest.issues.createComment({
owner,
repo,
issue_number: pr.number,
body: commentBody,
});
}
if (isBlocking) {
core.setFailed("PR intake sanity checks found blocking issues. See sticky comment for details.");
return;
}
core.info("PR intake sanity checks found advisory issues only.");
};
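The line-level checks only inspect added lines in the patch, so contributors can run an equivalent scan before pushing. A minimal sketch, assuming the PR merges into `main` (the intake script itself reads patches from the API rather than a local diff):

```bash
# Collect added lines (strip the leading '+', skip the '+++' file headers).
git diff origin/main...HEAD | grep -E '^\+' | grep -vE '^\+\+\+' \
  | cut -c2- > /tmp/added-lines

grep -n $'\t' /tmp/added-lines            && echo "tab characters found"
grep -nE $'[ \t]+$' /tmp/added-lines      && echo "trailing whitespace found"
grep -nE '^(<<<<<<<|=======|>>>>>>>)' /tmp/added-lines && echo "conflict markers found"
```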
@@ -1,805 +0,0 @@
// Apply managed PR labels (size/risk/path/module/contributor tiers).
// Extracted from pr-labeler workflow inline github-script for maintainability.
module.exports = async ({ github, context, core }) => {
const pr = context.payload.pull_request;
const owner = context.repo.owner;
const repo = context.repo.repo;
const action = context.payload.action;
const changedLabel = context.payload.label?.name;
const sizeLabels = ["size: XS", "size: S", "size: M", "size: L", "size: XL"];
const computedRiskLabels = ["risk: low", "risk: medium", "risk: high"];
const manualRiskOverrideLabel = "risk: manual";
const managedEnforcedLabels = new Set([
...sizeLabels,
manualRiskOverrideLabel,
...computedRiskLabels,
]);
if ((action === "labeled" || action === "unlabeled") && !managedEnforcedLabels.has(changedLabel)) {
core.info(`skip non-size/risk label event: ${changedLabel || "unknown"}`);
return;
}
async function loadContributorTierPolicy() {
const policyPath = process.env.LABEL_POLICY_PATH || ".github/label-policy.json";
const fallback = {
contributorTierColor: "2ED9FF",
contributorTierRules: [
{ label: "distinguished contributor", minMergedPRs: 50 },
{ label: "principal contributor", minMergedPRs: 20 },
{ label: "experienced contributor", minMergedPRs: 10 },
{ label: "trusted contributor", minMergedPRs: 5 },
],
};
try {
const { data } = await github.rest.repos.getContent({
owner,
repo,
path: policyPath,
ref: context.payload.repository?.default_branch || "main",
});
const json = JSON.parse(Buffer.from(data.content, "base64").toString("utf8"));
const contributorTierRules = (json.contributor_tiers || []).map((entry) => ({
label: String(entry.label || "").trim(),
minMergedPRs: Number(entry.min_merged_prs || 0),
}));
const contributorTierColor = String(json.contributor_tier_color || "").toUpperCase();
if (!contributorTierColor || contributorTierRules.length === 0) {
return fallback;
}
return { contributorTierColor, contributorTierRules };
} catch (error) {
core.warning(`failed to load ${policyPath}, using fallback policy: ${error.message}`);
return fallback;
}
}
const { contributorTierColor, contributorTierRules } = await loadContributorTierPolicy();
const contributorTierLabels = contributorTierRules.map((rule) => rule.label);
const managedPathLabels = [
"docs",
"dependencies",
"ci",
"core",
"agent",
"channel",
"config",
"cron",
"daemon",
"doctor",
"gateway",
"health",
"heartbeat",
"integration",
"memory",
"observability",
"onboard",
"provider",
"runtime",
"security",
"service",
"skillforge",
"skills",
"tool",
"tunnel",
"tests",
"scripts",
"dev",
];
const managedPathLabelSet = new Set(managedPathLabels);
const moduleNamespaceRules = [
{ root: "src/agent/", prefix: "agent", coreEntries: new Set(["mod.rs"]) },
{ root: "src/channels/", prefix: "channel", coreEntries: new Set(["mod.rs", "traits.rs"]) },
{ root: "src/config/", prefix: "config", coreEntries: new Set(["mod.rs", "schema.rs"]) },
{ root: "src/cron/", prefix: "cron", coreEntries: new Set(["mod.rs"]) },
{ root: "src/daemon/", prefix: "daemon", coreEntries: new Set(["mod.rs"]) },
{ root: "src/doctor/", prefix: "doctor", coreEntries: new Set(["mod.rs"]) },
{ root: "src/gateway/", prefix: "gateway", coreEntries: new Set(["mod.rs"]) },
{ root: "src/health/", prefix: "health", coreEntries: new Set(["mod.rs"]) },
{ root: "src/heartbeat/", prefix: "heartbeat", coreEntries: new Set(["mod.rs"]) },
{ root: "src/integrations/", prefix: "integration", coreEntries: new Set(["mod.rs", "registry.rs"]) },
{ root: "src/memory/", prefix: "memory", coreEntries: new Set(["mod.rs", "traits.rs"]) },
{ root: "src/observability/", prefix: "observability", coreEntries: new Set(["mod.rs", "traits.rs"]) },
{ root: "src/onboard/", prefix: "onboard", coreEntries: new Set(["mod.rs"]) },
{ root: "src/providers/", prefix: "provider", coreEntries: new Set(["mod.rs", "traits.rs"]) },
{ root: "src/runtime/", prefix: "runtime", coreEntries: new Set(["mod.rs", "traits.rs"]) },
{ root: "src/security/", prefix: "security", coreEntries: new Set(["mod.rs"]) },
{ root: "src/service/", prefix: "service", coreEntries: new Set(["mod.rs"]) },
{ root: "src/skillforge/", prefix: "skillforge", coreEntries: new Set(["mod.rs"]) },
{ root: "src/skills/", prefix: "skills", coreEntries: new Set(["mod.rs"]) },
{ root: "src/tools/", prefix: "tool", coreEntries: new Set(["mod.rs", "traits.rs"]) },
{ root: "src/tunnel/", prefix: "tunnel", coreEntries: new Set(["mod.rs"]) },
];
const managedModulePrefixes = [...new Set(moduleNamespaceRules.map((rule) => `${rule.prefix}:`))];
const orderedOtherLabelStyles = [
{ label: "health", color: "8EC9B8" },
{ label: "tool", color: "7FC4B6" },
{ label: "agent", color: "86C4A2" },
{ label: "memory", color: "8FCB99" },
{ label: "channel", color: "7EB6F2" },
{ label: "service", color: "95C7B6" },
{ label: "integration", color: "8DC9AE" },
{ label: "tunnel", color: "9FC8B3" },
{ label: "config", color: "AABCD0" },
{ label: "observability", color: "84C9D0" },
{ label: "docs", color: "8FBBE0" },
{ label: "dev", color: "B9C1CC" },
{ label: "tests", color: "9DC8C7" },
{ label: "skills", color: "BFC89B" },
{ label: "skillforge", color: "C9C39B" },
{ label: "provider", color: "958DF0" },
{ label: "runtime", color: "A3ADD8" },
{ label: "heartbeat", color: "C0C88D" },
{ label: "daemon", color: "C8C498" },
{ label: "doctor", color: "C1CF9D" },
{ label: "onboard", color: "D2BF86" },
{ label: "cron", color: "D2B490" },
{ label: "ci", color: "AEB4CE" },
{ label: "dependencies", color: "9FB1DE" },
{ label: "gateway", color: "B5A8E5" },
{ label: "security", color: "E58D85" },
{ label: "core", color: "C8A99B" },
{ label: "scripts", color: "C9B49F" },
];
const otherLabelDisplayOrder = orderedOtherLabelStyles.map((entry) => entry.label);
const modulePrefixSet = new Set(moduleNamespaceRules.map((rule) => rule.prefix));
const modulePrefixPriority = otherLabelDisplayOrder.filter((label) => modulePrefixSet.has(label));
const pathLabelPriority = [...otherLabelDisplayOrder];
const riskDisplayOrder = ["risk: high", "risk: medium", "risk: low", "risk: manual"];
const sizeDisplayOrder = ["size: XS", "size: S", "size: M", "size: L", "size: XL"];
const contributorDisplayOrder = [
"distinguished contributor",
"principal contributor",
"experienced contributor",
"trusted contributor",
];
const modulePrefixPriorityIndex = new Map(
modulePrefixPriority.map((prefix, index) => [prefix, index])
);
const pathLabelPriorityIndex = new Map(
pathLabelPriority.map((label, index) => [label, index])
);
const riskPriorityIndex = new Map(
riskDisplayOrder.map((label, index) => [label, index])
);
const sizePriorityIndex = new Map(
sizeDisplayOrder.map((label, index) => [label, index])
);
const contributorPriorityIndex = new Map(
contributorDisplayOrder.map((label, index) => [label, index])
);
const otherLabelColors = Object.fromEntries(
orderedOtherLabelStyles.map((entry) => [entry.label, entry.color])
);
const staticLabelColors = {
"size: XS": "E7CDD3",
"size: S": "E1BEC7",
"size: M": "DBB0BB",
"size: L": "D4A2AF",
"size: XL": "CE94A4",
"risk: low": "97D3A6",
"risk: medium": "E4C47B",
"risk: high": "E98E88",
"risk: manual": "B7A4E0",
...otherLabelColors,
};
const staticLabelDescriptions = {
"size: XS": "Auto size: <=80 non-doc changed lines.",
"size: S": "Auto size: 81-250 non-doc changed lines.",
"size: M": "Auto size: 251-500 non-doc changed lines.",
"size: L": "Auto size: 501-1000 non-doc changed lines.",
"size: XL": "Auto size: >1000 non-doc changed lines.",
"risk: low": "Auto risk: docs/chore-only paths.",
"risk: medium": "Auto risk: src/** or dependency/config changes.",
"risk: high": "Auto risk: security/runtime/gateway/tools/workflows.",
"risk: manual": "Maintainer override: keep selected risk label.",
docs: "Auto scope: docs/markdown/template files changed.",
dependencies: "Auto scope: dependency manifest/lock/policy changed.",
ci: "Auto scope: CI/workflow/hook files changed.",
core: "Auto scope: root src/*.rs files changed.",
agent: "Auto scope: src/agent/** changed.",
channel: "Auto scope: src/channels/** changed.",
config: "Auto scope: src/config/** changed.",
cron: "Auto scope: src/cron/** changed.",
daemon: "Auto scope: src/daemon/** changed.",
doctor: "Auto scope: src/doctor/** changed.",
gateway: "Auto scope: src/gateway/** changed.",
health: "Auto scope: src/health/** changed.",
heartbeat: "Auto scope: src/heartbeat/** changed.",
integration: "Auto scope: src/integrations/** changed.",
memory: "Auto scope: src/memory/** changed.",
observability: "Auto scope: src/observability/** changed.",
onboard: "Auto scope: src/onboard/** changed.",
provider: "Auto scope: src/providers/** changed.",
runtime: "Auto scope: src/runtime/** changed.",
security: "Auto scope: src/security/** changed.",
service: "Auto scope: src/service/** changed.",
skillforge: "Auto scope: src/skillforge/** changed.",
skills: "Auto scope: src/skills/** changed.",
tool: "Auto scope: src/tools/** changed.",
tunnel: "Auto scope: src/tunnel/** changed.",
tests: "Auto scope: tests/** changed.",
scripts: "Auto scope: scripts/** changed.",
dev: "Auto scope: dev/** changed.",
};
for (const label of contributorTierLabels) {
staticLabelColors[label] = contributorTierColor;
const rule = contributorTierRules.find((entry) => entry.label === label);
if (rule) {
staticLabelDescriptions[label] = `Contributor with ${rule.minMergedPRs}+ merged PRs.`;
}
}
const modulePrefixColors = Object.fromEntries(
modulePrefixPriority.map((prefix) => [
`${prefix}:`,
otherLabelColors[prefix] || "BFDADC",
])
);
const providerKeywordHints = [
"deepseek",
"moonshot",
"kimi",
"qwen",
"mistral",
"doubao",
"baichuan",
"yi",
"siliconflow",
"vertex",
"azure",
"perplexity",
"venice",
"vercel",
"cloudflare",
"synthetic",
"opencode",
"zai",
"glm",
"minimax",
"bedrock",
"qianfan",
"groq",
"together",
"fireworks",
"novita",
"cohere",
"openai",
"openrouter",
"anthropic",
"gemini",
"ollama",
];
const channelKeywordHints = [
"telegram",
"discord",
"slack",
"whatsapp",
"matrix",
"irc",
"imessage",
"email",
"cli",
];
function isDocsLike(path) {
return (
path.startsWith("docs/") ||
path.endsWith(".md") ||
path.endsWith(".mdx") ||
path === "LICENSE" ||
path === ".markdownlint-cli2.yaml" ||
path === ".github/pull_request_template.md" ||
path.startsWith(".github/ISSUE_TEMPLATE/")
);
}
function normalizeLabelSegment(segment) {
return (segment || "")
.toLowerCase()
.replace(/\.rs$/g, "")
.replace(/[^a-z0-9_-]+/g, "-")
.replace(/^[-_]+|[-_]+$/g, "")
.slice(0, 40);
}
function containsKeyword(text, keyword) {
const escaped = keyword.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
const pattern = new RegExp(`(^|[^a-z0-9_])${escaped}([^a-z0-9_]|$)`, "i");
return pattern.test(text);
}
function formatModuleLabel(prefix, segment) {
return `${prefix}: ${segment}`;
}
function parseModuleLabel(label) {
if (typeof label !== "string") return null;
const match = label.match(/^([^:]+):\s*(.+)$/);
if (!match) return null;
const prefix = match[1].trim().toLowerCase();
const segment = (match[2] || "").trim().toLowerCase();
if (!prefix || !segment) return null;
return { prefix, segment };
}
function sortByPriority(labels, priorityIndex) {
return [...new Set(labels)].sort((left, right) => {
const leftPriority = priorityIndex.has(left) ? priorityIndex.get(left) : Number.MAX_SAFE_INTEGER;
const rightPriority = priorityIndex.has(right) ? priorityIndex.get(right) : Number.MAX_SAFE_INTEGER;
if (leftPriority !== rightPriority) return leftPriority - rightPriority;
return left.localeCompare(right);
});
}
function sortModuleLabels(labels) {
return [...new Set(labels)].sort((left, right) => {
const leftParsed = parseModuleLabel(left);
const rightParsed = parseModuleLabel(right);
if (!leftParsed || !rightParsed) return left.localeCompare(right);
const leftPrefixPriority = modulePrefixPriorityIndex.has(leftParsed.prefix)
? modulePrefixPriorityIndex.get(leftParsed.prefix)
: Number.MAX_SAFE_INTEGER;
const rightPrefixPriority = modulePrefixPriorityIndex.has(rightParsed.prefix)
? modulePrefixPriorityIndex.get(rightParsed.prefix)
: Number.MAX_SAFE_INTEGER;
if (leftPrefixPriority !== rightPrefixPriority) {
return leftPrefixPriority - rightPrefixPriority;
}
if (leftParsed.prefix !== rightParsed.prefix) {
return leftParsed.prefix.localeCompare(rightParsed.prefix);
}
const leftIsCore = leftParsed.segment === "core";
const rightIsCore = rightParsed.segment === "core";
if (leftIsCore !== rightIsCore) return leftIsCore ? 1 : -1;
return leftParsed.segment.localeCompare(rightParsed.segment);
});
}
function refineModuleLabels(rawLabels) {
const refined = new Set(rawLabels);
const segmentsByPrefix = new Map();
for (const label of rawLabels) {
const parsed = parseModuleLabel(label);
if (!parsed) continue;
if (!segmentsByPrefix.has(parsed.prefix)) {
segmentsByPrefix.set(parsed.prefix, new Set());
}
segmentsByPrefix.get(parsed.prefix).add(parsed.segment);
}
for (const [prefix, segments] of segmentsByPrefix) {
const hasSpecificSegment = [...segments].some((segment) => segment !== "core");
if (hasSpecificSegment) {
refined.delete(formatModuleLabel(prefix, "core"));
}
}
return refined;
}
function compactModuleLabels(labels) {
const groupedSegments = new Map();
const compactedModuleLabels = new Set();
const forcePathPrefixes = new Set();
for (const label of labels) {
const parsed = parseModuleLabel(label);
if (!parsed) {
compactedModuleLabels.add(label);
continue;
}
if (!groupedSegments.has(parsed.prefix)) {
groupedSegments.set(parsed.prefix, new Set());
}
groupedSegments.get(parsed.prefix).add(parsed.segment);
}
for (const [prefix, segments] of groupedSegments) {
const uniqueSegments = [...new Set([...segments].filter(Boolean))];
if (uniqueSegments.length === 0) continue;
if (uniqueSegments.length === 1) {
compactedModuleLabels.add(formatModuleLabel(prefix, uniqueSegments[0]));
} else {
forcePathPrefixes.add(prefix);
}
}
return {
moduleLabels: compactedModuleLabels,
forcePathPrefixes,
};
}
function colorForLabel(label) {
if (staticLabelColors[label]) return staticLabelColors[label];
const matchedPrefix = Object.keys(modulePrefixColors).find((prefix) => label.startsWith(prefix));
if (matchedPrefix) return modulePrefixColors[matchedPrefix];
return "BFDADC";
}
function descriptionForLabel(label) {
if (staticLabelDescriptions[label]) return staticLabelDescriptions[label];
const parsed = parseModuleLabel(label);
if (parsed) {
if (parsed.segment === "core") {
return `Auto module: ${parsed.prefix} core files changed.`;
}
return `Auto module: ${parsed.prefix}/${parsed.segment} changed.`;
}
return "Auto-managed label.";
}
async function ensureLabel(name, existing = null) {
const expectedColor = colorForLabel(name);
const expectedDescription = descriptionForLabel(name);
try {
const current = existing || (await github.rest.issues.getLabel({ owner, repo, name })).data;
const currentColor = (current.color || "").toUpperCase();
const currentDescription = (current.description || "").trim();
if (currentColor !== expectedColor || currentDescription !== expectedDescription) {
await github.rest.issues.updateLabel({
owner,
repo,
name,
new_name: name,
color: expectedColor,
description: expectedDescription,
});
}
} catch (error) {
if (error.status !== 404) throw error;
await github.rest.issues.createLabel({
owner,
repo,
name,
color: expectedColor,
description: expectedDescription,
});
}
}
function isManagedLabel(label) {
if (label === manualRiskOverrideLabel) return true;
if (sizeLabels.includes(label) || computedRiskLabels.includes(label)) return true;
if (managedPathLabelSet.has(label)) return true;
if (contributorTierLabels.includes(label)) return true;
if (managedModulePrefixes.some((prefix) => label.startsWith(prefix))) return true;
return false;
}
async function ensureManagedRepoLabelsMetadata() {
const repoLabels = await github.paginate(github.rest.issues.listLabelsForRepo, {
owner,
repo,
per_page: 100,
});
for (const existingLabel of repoLabels) {
const labelName = existingLabel.name || "";
if (!isManagedLabel(labelName)) continue;
await ensureLabel(labelName, existingLabel);
}
}
function selectContributorTier(mergedCount) {
const matchedTier = contributorTierRules.find((rule) => mergedCount >= rule.minMergedPRs);
return matchedTier ? matchedTier.label : null;
}
if (context.eventName === "workflow_dispatch") {
const mode = (context.payload.inputs?.mode || "audit").toLowerCase();
const shouldRepair = mode === "repair";
const repoLabels = await github.paginate(github.rest.issues.listLabelsForRepo, {
owner,
repo,
per_page: 100,
});
let managedScanned = 0;
const drifts = [];
for (const existingLabel of repoLabels) {
const labelName = existingLabel.name || "";
if (!isManagedLabel(labelName)) continue;
managedScanned += 1;
const expectedColor = colorForLabel(labelName);
const expectedDescription = descriptionForLabel(labelName);
const currentColor = (existingLabel.color || "").toUpperCase();
const currentDescription = (existingLabel.description || "").trim();
if (currentColor !== expectedColor || currentDescription !== expectedDescription) {
drifts.push({
name: labelName,
currentColor,
expectedColor,
currentDescription,
expectedDescription,
});
if (shouldRepair) {
await ensureLabel(labelName, existingLabel);
}
}
}
core.summary
.addHeading("Managed Label Governance", 2)
.addRaw(`Mode: ${shouldRepair ? "repair" : "audit"}`)
.addEOL()
.addRaw(`Managed labels scanned: ${managedScanned}`)
.addEOL()
.addRaw(`Drifts found: ${drifts.length}`)
.addEOL();
if (drifts.length > 0) {
const sample = drifts.slice(0, 30).map((entry) => [
entry.name,
`${entry.currentColor} -> ${entry.expectedColor}`,
`${entry.currentDescription || "(blank)"} -> ${entry.expectedDescription}`,
]);
core.summary.addTable([
[{ data: "Label", header: true }, { data: "Color", header: true }, { data: "Description", header: true }],
...sample,
]);
if (drifts.length > sample.length) {
core.summary
.addRaw(`Additional drifts not shown: ${drifts.length - sample.length}`)
.addEOL();
}
}
await core.summary.write();
if (!shouldRepair && drifts.length > 0) {
core.info(`Managed-label metadata drifts detected: ${drifts.length}. Re-run with mode=repair to auto-fix.`);
} else if (shouldRepair) {
core.info(`Managed-label metadata repair applied to ${drifts.length} labels.`);
} else {
core.info("No managed-label metadata drift detected.");
}
return;
}
const files = await github.paginate(github.rest.pulls.listFiles, {
owner,
repo,
pull_number: pr.number,
per_page: 100,
});
const detectedModuleLabels = new Set();
for (const file of files) {
const path = (file.filename || "").toLowerCase();
for (const rule of moduleNamespaceRules) {
if (!path.startsWith(rule.root)) continue;
const relative = path.slice(rule.root.length);
if (!relative) continue;
const first = relative.split("/")[0];
const firstStem = first.endsWith(".rs") ? first.slice(0, -3) : first;
let segment = firstStem;
if (rule.coreEntries.has(first) || rule.coreEntries.has(firstStem)) {
segment = "core";
}
segment = normalizeLabelSegment(segment);
if (!segment) continue;
detectedModuleLabels.add(formatModuleLabel(rule.prefix, segment));
}
}
const providerRelevantFiles = files.filter((file) => {
const path = file.filename || "";
return (
path.startsWith("src/providers/") ||
path.startsWith("src/integrations/") ||
path.startsWith("src/onboard/") ||
path.startsWith("src/config/")
);
});
if (providerRelevantFiles.length > 0) {
const searchableText = [
pr.title || "",
pr.body || "",
...providerRelevantFiles.map((file) => file.filename || ""),
...providerRelevantFiles.map((file) => file.patch || ""),
]
.join("\n")
.toLowerCase();
for (const keyword of providerKeywordHints) {
if (containsKeyword(searchableText, keyword)) {
detectedModuleLabels.add(formatModuleLabel("provider", keyword));
}
}
}
const channelRelevantFiles = files.filter((file) => {
const path = file.filename || "";
return (
path.startsWith("src/channels/") ||
path.startsWith("src/onboard/") ||
path.startsWith("src/config/")
);
});
if (channelRelevantFiles.length > 0) {
const searchableText = [
pr.title || "",
pr.body || "",
...channelRelevantFiles.map((file) => file.filename || ""),
...channelRelevantFiles.map((file) => file.patch || ""),
]
.join("\n")
.toLowerCase();
for (const keyword of channelKeywordHints) {
if (containsKeyword(searchableText, keyword)) {
detectedModuleLabels.add(formatModuleLabel("channel", keyword));
}
}
}
const refinedModuleLabels = refineModuleLabels(detectedModuleLabels);
const compactedModuleState = compactModuleLabels(refinedModuleLabels);
const selectedModuleLabels = compactedModuleState.moduleLabels;
const forcePathPrefixes = compactedModuleState.forcePathPrefixes;
const modulePrefixesWithLabels = new Set(
[...selectedModuleLabels]
.map((label) => parseModuleLabel(label)?.prefix)
.filter(Boolean)
);
const { data: currentLabels } = await github.rest.issues.listLabelsOnIssue({
owner,
repo,
issue_number: pr.number,
});
const currentLabelNames = currentLabels.map((label) => label.name);
const currentPathLabels = currentLabelNames.filter((label) => managedPathLabelSet.has(label));
const candidatePathLabels = new Set([...currentPathLabels, ...forcePathPrefixes]);
const dedupedPathLabels = [...candidatePathLabels].filter((label) => {
if (label === "core") return true;
if (forcePathPrefixes.has(label)) return true;
return !modulePrefixesWithLabels.has(label);
});
const excludedLockfiles = new Set(["Cargo.lock"]);
const changedLines = files.reduce((total, file) => {
const path = file.filename || "";
if (isDocsLike(path) || excludedLockfiles.has(path)) {
return total;
}
return total + (file.additions || 0) + (file.deletions || 0);
}, 0);
let sizeLabel = "size: XL";
if (changedLines <= 80) sizeLabel = "size: XS";
else if (changedLines <= 250) sizeLabel = "size: S";
else if (changedLines <= 500) sizeLabel = "size: M";
else if (changedLines <= 1000) sizeLabel = "size: L";
const hasHighRiskPath = files.some((file) => {
const path = file.filename || "";
return (
path.startsWith("src/security/") ||
path.startsWith("src/runtime/") ||
path.startsWith("src/gateway/") ||
path.startsWith("src/tools/") ||
path.startsWith(".github/workflows/")
);
});
const hasMediumRiskPath = files.some((file) => {
const path = file.filename || "";
return (
path.startsWith("src/") ||
path === "Cargo.toml" ||
path === "Cargo.lock" ||
path === "deny.toml" ||
path.startsWith(".githooks/")
);
});
let riskLabel = "risk: low";
if (hasHighRiskPath) {
riskLabel = "risk: high";
} else if (hasMediumRiskPath) {
riskLabel = "risk: medium";
}
await ensureManagedRepoLabelsMetadata();
const labelsToEnsure = new Set([
...sizeLabels,
...computedRiskLabels,
manualRiskOverrideLabel,
...managedPathLabels,
...contributorTierLabels,
...selectedModuleLabels,
]);
for (const label of labelsToEnsure) {
await ensureLabel(label);
}
let contributorTierLabel = null;
const authorLogin = pr.user?.login;
if (authorLogin && pr.user?.type !== "Bot") {
try {
const { data: mergedSearch } = await github.rest.search.issuesAndPullRequests({
q: `repo:${owner}/${repo} is:pr is:merged author:${authorLogin}`,
per_page: 1,
});
const mergedCount = mergedSearch.total_count || 0;
contributorTierLabel = selectContributorTier(mergedCount);
} catch (error) {
core.warning(`failed to compute contributor tier label: ${error.message}`);
}
}
const hasManualRiskOverride = currentLabelNames.includes(manualRiskOverrideLabel);
const keepNonManagedLabels = currentLabelNames.filter((label) => {
if (label === manualRiskOverrideLabel) return true;
if (contributorTierLabels.includes(label)) return false;
if (sizeLabels.includes(label) || computedRiskLabels.includes(label)) return false;
if (managedPathLabelSet.has(label)) return false;
if (managedModulePrefixes.some((prefix) => label.startsWith(prefix))) return false;
return true;
});
const manualRiskSelection =
currentLabelNames.find((label) => computedRiskLabels.includes(label)) || riskLabel;
const moduleLabelList = sortModuleLabels([...selectedModuleLabels]);
const contributorLabelList = contributorTierLabel ? [contributorTierLabel] : [];
const selectedRiskLabels = hasManualRiskOverride
? sortByPriority([manualRiskSelection, manualRiskOverrideLabel], riskPriorityIndex)
: sortByPriority([riskLabel], riskPriorityIndex);
const selectedSizeLabels = sortByPriority([sizeLabel], sizePriorityIndex);
const sortedContributorLabels = sortByPriority(contributorLabelList, contributorPriorityIndex);
const sortedPathLabels = sortByPriority(dedupedPathLabels, pathLabelPriorityIndex);
const sortedKeepNonManagedLabels = [...new Set(keepNonManagedLabels)].sort((left, right) =>
left.localeCompare(right)
);
const nextLabels = [
...new Set([
...selectedRiskLabels,
...selectedSizeLabels,
...sortedContributorLabels,
...moduleLabelList,
...sortedPathLabels,
...sortedKeepNonManagedLabels,
]),
];
await github.rest.issues.setLabels({
owner,
repo,
issue_number: pr.number,
labels: nextLabels,
});
};
@@ -1,57 +0,0 @@
// Extracted from test-benchmarks.yml step: Post benchmark summary on PR
module.exports = async ({ github, context, core }) => {
const fs = require('fs');
const output = fs.readFileSync('benchmark_output.txt', 'utf8');
// Extract Criterion result lines
const lines = output.split('\n').filter(l =>
l.includes('time:') || l.includes('change:') || l.includes('Performance')
);
if (lines.length === 0) {
core.info('No benchmark results to post.');
return;
}
const body = [
'## 📊 Benchmark Results',
'',
'```',
lines.join('\n'),
'```',
'',
'<details><summary>Full output</summary>',
'',
'```',
output.substring(0, 60000),
'```',
'</details>',
].join('\n');
// Find and update or create comment
const { data: comments } = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.payload.pull_request.number,
});
const marker = '## 📊 Benchmark Results';
const existing = comments.find(c => c.body && c.body.startsWith(marker));
if (existing) {
await github.rest.issues.updateComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: existing.id,
body,
});
} else {
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.payload.pull_request.number,
body,
});
}
};
@@ -1,57 +0,0 @@
name: Sec Audit
on:
push:
branches: [dev, main]
paths:
- "Cargo.toml"
- "Cargo.lock"
- "src/**"
- "crates/**"
- "deny.toml"
pull_request:
branches: [dev, main]
paths:
- "Cargo.toml"
- "Cargo.lock"
- "src/**"
- "crates/**"
- "deny.toml"
schedule:
- cron: "0 6 * * 1" # Weekly on Monday 6am UTC
concurrency:
group: security-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions:
contents: read
security-events: write
actions: read
checks: write
env:
CARGO_TERM_COLOR: always
jobs:
audit:
name: Security Audit
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 20
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: rustsec/audit-check@69366f33c96575abad1ee0dba8212993eecbe998 # v2.0.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
deny:
name: License & Supply Chain
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 20
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: EmbarkStudios/cargo-deny-action@3fd3802e88374d3fe9159b834c7714ec57d6c979 # v2
with:
command: check advisories licenses sources
@@ -1,39 +0,0 @@
name: Sec CodeQL
on:
schedule:
- cron: "0 6 * * 1" # Weekly Monday 6am UTC
workflow_dispatch:
concurrency:
group: codeql-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
security-events: write
actions: read
jobs:
codeql:
name: CodeQL Analysis
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Initialize CodeQL
uses: github/codeql-action/init@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4
with:
languages: rust
config-file: ./.github/codeql/codeql-config.yml
- name: Set up Rust
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
- name: Build
run: cargo build --workspace --all-targets
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4
@@ -1,185 +0,0 @@
name: Sec Vorpal Reviewdog
on:
workflow_dispatch:
inputs:
scan_scope:
description: "File selection mode when source_path is empty"
required: true
type: choice
default: changed
options:
- changed
- all
base_ref:
description: "Base branch/ref for changed diff mode"
required: true
type: string
default: main
source_path:
description: "Optional comma-separated file paths to scan (overrides scan_scope)"
required: false
type: string
include_tests:
description: "Include test/fixture files in scan selection"
required: true
type: choice
default: "false"
options:
- "false"
- "true"
folders_to_ignore:
description: "Optional comma-separated path prefixes to ignore"
required: false
type: string
default: target,node_modules,web/dist,.venv,venv
reporter:
description: "Reviewdog reporter mode"
required: true
type: choice
default: github-pr-check
options:
- github-pr-check
- github-pr-review
filter_mode:
description: "Reviewdog filter mode"
required: true
type: choice
default: file
options:
- added
- diff_context
- file
- nofilter
level:
description: "Reviewdog severity level"
required: true
type: choice
default: error
options:
- info
- warning
- error
fail_on_error:
description: "Fail workflow when Vorpal reports findings"
required: true
type: choice
default: "false"
options:
- "false"
- "true"
reviewdog_flags:
description: "Optional extra reviewdog flags"
required: false
type: string
concurrency:
group: sec-vorpal-reviewdog-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
checks: write
pull-requests: write
jobs:
vorpal:
name: Vorpal Reviewdog Scan
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 20
steps:
- name: Checkout
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Resolve source paths
id: sources
shell: bash
env:
INPUT_SOURCE_PATH: ${{ inputs.source_path }}
INPUT_SCAN_SCOPE: ${{ inputs.scan_scope }}
INPUT_BASE_REF: ${{ inputs.base_ref }}
INPUT_INCLUDE_TESTS: ${{ inputs.include_tests }}
run: |
set -euo pipefail
strip_space() {
local value="$1"
value="${value//$'\n'/}"
value="${value//$'\r'/}"
value="${value// /}"
echo "$value"
}
source_override="$(strip_space "${INPUT_SOURCE_PATH}")"
if [ -n "${source_override}" ]; then
normalized="$(echo "${INPUT_SOURCE_PATH}" | tr '\n' ',' | sed -E 's/[[:space:]]+//g; s/,+/,/g; s/^,|,$//g')"
if [ -n "${normalized}" ]; then
{
echo "scan=true"
echo "source_path=${normalized}"
echo "selection=manual"
} >> "${GITHUB_OUTPUT}"
exit 0
fi
fi
include_ext='\.(py|js|jsx|ts|tsx)$'
exclude_paths='^(target/|node_modules/|web/node_modules/|dist/|web/dist/|\.venv/|venv/)'
exclude_tests='(^|/)(test|tests|__tests__|fixtures|mocks|examples)/|(^|/)test_helpers/|(_test\.py$)|(^|/)test_.*\.py$|(\.spec\.(ts|tsx|js|jsx)$)|(\.test\.(ts|tsx|js|jsx)$)'
if [ "${INPUT_SCAN_SCOPE}" = "all" ]; then
candidate_files="$(git ls-files)"
else
base_ref="${INPUT_BASE_REF#refs/heads/}"
base_ref="${base_ref#origin/}"
if git fetch --no-tags --depth=1 origin "${base_ref}" >/dev/null 2>&1; then
if merge_base="$(git merge-base HEAD "origin/${base_ref}" 2>/dev/null)"; then
candidate_files="$(git diff --name-only --diff-filter=ACMR "${merge_base}"...HEAD)"
else
echo "Unable to resolve merge-base for origin/${base_ref}; falling back to tracked files."
candidate_files="$(git ls-files)"
fi
else
echo "Unable to fetch origin/${base_ref}; falling back to tracked files."
candidate_files="$(git ls-files)"
fi
fi
source_files="$(printf '%s\n' "${candidate_files}" | sed '/^$/d' | grep -E "${include_ext}" | grep -Ev "${exclude_paths}" || true)"
if [ "${INPUT_INCLUDE_TESTS}" != "true" ] && [ -n "${source_files}" ]; then
source_files="$(printf '%s\n' "${source_files}" | grep -Ev "${exclude_tests}" || true)"
fi
if [ -z "${source_files}" ]; then
{
echo "scan=false"
echo "source_path="
echo "selection=none"
} >> "${GITHUB_OUTPUT}"
exit 0
fi
source_path="$(printf '%s\n' "${source_files}" | paste -sd, -)"
{
echo "scan=true"
echo "source_path=${source_path}"
echo "selection=auto-${INPUT_SCAN_SCOPE}"
} >> "${GITHUB_OUTPUT}"
- name: No supported files to scan
if: steps.sources.outputs.scan != 'true'
shell: bash
run: |
echo "No supported files selected for Vorpal scan (extensions: .py .js .jsx .ts .tsx)."
- name: Run Vorpal with reviewdog
if: steps.sources.outputs.scan == 'true'
uses: Checkmarx/vorpal-reviewdog-github-action@8cc292f337a2f1dea581b4f4bd73852e7becb50d # v1.2.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
source_path: ${{ steps.sources.outputs.source_path }}
folders_to_ignore: ${{ inputs.folders_to_ignore }}
reporter: ${{ inputs.reporter }}
filter_mode: ${{ inputs.filter_mode }}
level: ${{ inputs.level }}
fail_on_error: ${{ inputs.fail_on_error }}
reviewdog_flags: ${{ inputs.reviewdog_flags }}
@@ -1,116 +0,0 @@
name: Sync Contributors
on:
workflow_dispatch:
schedule:
# Run every Sunday at 00:00 UTC
- cron: '0 0 * * 0'
concurrency:
group: update-notice-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: write
pull-requests: write
jobs:
update-notice:
name: Update NOTICE with new contributors
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Fetch contributors
id: contributors
env:
GH_TOKEN: ${{ github.token }}
run: |
# Fetch all contributors (excluding bots)
gh api \
--paginate \
"repos/${{ github.repository }}/contributors" \
--jq '.[] | select(.type != "Bot") | .login' > /tmp/contributors_raw.txt
# Sort alphabetically and filter
sort -f < /tmp/contributors_raw.txt > contributors.txt
# Count contributors
count=$(wc -l < contributors.txt | tr -d ' ')
echo "count=$count" >> "$GITHUB_OUTPUT"
- name: Generate new NOTICE file
run: |
cat > NOTICE << 'EOF'
ZeroClaw
Copyright 2025 ZeroClaw Labs
This product includes software developed at ZeroClaw Labs (https://github.com/zeroclaw-labs).
Contributors
============
The following individuals have contributed to ZeroClaw:
EOF
# Append contributors in alphabetical order
sed 's/^/- /' contributors.txt >> NOTICE
# Add third-party dependencies section
cat >> NOTICE << 'EOF'
Third-Party Dependencies
=========================
This project uses the following third-party libraries and components,
each licensed under their respective terms:
See Cargo.lock for a complete list of dependencies and their licenses.
EOF
- name: Check if NOTICE changed
id: check_diff
run: |
if git diff --quiet NOTICE; then
echo "changed=false" >> "$GITHUB_OUTPUT"
else
echo "changed=true" >> "$GITHUB_OUTPUT"
fi
- name: Create Pull Request
if: steps.check_diff.outputs.changed == 'true'
env:
GH_TOKEN: ${{ github.token }}
COUNT: ${{ steps.contributors.outputs.count }}
run: |
branch_name="auto/update-notice-$(date +%Y%m%d)"
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
git checkout -b "$branch_name"
git add NOTICE
git commit -m "chore(notice): update contributor list"
git push origin "$branch_name"
gh pr create \
--title "chore(notice): update contributor list" \
--body "Auto-generated update to NOTICE file with $COUNT contributors." \
--label "chore" \
--label "docs" \
--draft || true
- name: Summary
run: |
echo "## NOTICE Update Results" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
if [ "${{ steps.check_diff.outputs.changed }}" = "true" ]; then
echo "✅ PR created to update NOTICE" >> "$GITHUB_STEP_SUMMARY"
else
echo "✓ NOTICE file is up to date" >> "$GITHUB_STEP_SUMMARY"
fi
echo "" >> "$GITHUB_STEP_SUMMARY"
echo "**Contributors:** ${{ steps.contributors.outputs.count }}" >> "$GITHUB_STEP_SUMMARY"
@@ -1,50 +0,0 @@
name: Test Benchmarks
on:
schedule:
- cron: "0 3 * * 1" # Weekly Monday 3am UTC
workflow_dispatch:
concurrency:
group: bench-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
pull-requests: write
env:
CARGO_TERM_COLOR: always
jobs:
benchmarks:
name: Criterion Benchmarks
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 30
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run benchmarks
run: cargo bench --locked 2>&1 | tee benchmark_output.txt
- name: Upload benchmark results
if: always()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
with:
name: benchmark-results
path: |
target/criterion/
benchmark_output.txt
retention-days: 7
- name: Post benchmark summary on PR
if: github.event_name == 'pull_request'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const script = require('./.github/workflows/scripts/test_benchmarks_pr_comment.js');
await script({ github, context, core });
@@ -1,30 +0,0 @@
name: Test E2E
on:
push:
branches: [dev, main]
workflow_dispatch:
concurrency:
group: e2e-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
jobs:
integration-tests:
name: Integration / E2E Tests
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 30
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run integration / E2E tests
run: cargo test --test agent_e2e --locked --verbose
@@ -1,72 +0,0 @@
name: Test Fuzz
on:
schedule:
- cron: "0 2 * * 0" # Weekly Sunday 2am UTC
workflow_dispatch:
inputs:
fuzz_seconds:
description: "Seconds to run each fuzz target"
required: false
default: "300"
concurrency:
group: fuzz-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
issues: write
env:
CARGO_TERM_COLOR: always
jobs:
fuzz:
name: Fuzz (${{ matrix.target }})
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
target:
- fuzz_config_parse
- fuzz_tool_params
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: nightly
components: llvm-tools-preview
- name: Install cargo-fuzz
run: cargo install cargo-fuzz --locked
- name: Run fuzz target
run: |
SECONDS="${{ github.event.inputs.fuzz_seconds || '300' }}"
echo "Fuzzing ${{ matrix.target }} for ${SECONDS}s"
cargo +nightly fuzz run ${{ matrix.target }} -- \
-max_total_time="${SECONDS}" \
-max_len=4096
continue-on-error: true
id: fuzz
- name: Upload crash artifacts
if: failure() || steps.fuzz.outcome == 'failure'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with:
name: fuzz-crashes-${{ matrix.target }}
path: fuzz/artifacts/${{ matrix.target }}/
retention-days: 30
if-no-files-found: ignore
- name: Report fuzz results
run: |
echo "### Fuzz: ${{ matrix.target }}" >> "$GITHUB_STEP_SUMMARY"
if [ "${{ steps.fuzz.outcome }}" = "failure" ]; then
echo "- :x: Crashes found — see artifacts" >> "$GITHUB_STEP_SUMMARY"
else
echo "- :white_check_mark: No crashes found" >> "$GITHUB_STEP_SUMMARY"
fi
@@ -1,62 +0,0 @@
name: Test Rust Build
on:
workflow_call:
inputs:
run_command:
description: "Shell command(s) to execute."
required: true
type: string
timeout_minutes:
description: "Job timeout in minutes."
required: false
default: 20
type: number
toolchain:
description: "Rust toolchain channel/version."
required: false
default: "stable"
type: string
components:
description: "Optional rustup components."
required: false
default: ""
type: string
targets:
description: "Optional rustup targets."
required: false
default: ""
type: string
use_cache:
description: "Whether to enable rust-cache."
required: false
default: true
type: boolean
permissions:
contents: read
jobs:
run:
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: ${{ inputs.timeout_minutes }}
steps:
- name: Checkout repository
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Setup Rust toolchain
uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: ${{ inputs.toolchain }}
components: ${{ inputs.components }}
targets: ${{ inputs.targets }}
- name: Restore Rust cache
if: inputs.use_cache
uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
- name: Run command
shell: bash
run: |
set -euo pipefail
${{ inputs.run_command }}
@@ -0,0 +1,223 @@
name: Tweet Release
on:
release:
types: [published]
workflow_dispatch:
inputs:
tweet_text:
description: "Custom tweet text (include emojis, keep it punchy)"
required: true
type: string
image_url:
description: "Optional image URL to attach (png/jpg)"
required: false
type: string
jobs:
tweet:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Build tweet text
id: tweet
shell: bash
env:
RELEASE_TAG: ${{ github.event.release.tag_name || '' }}
RELEASE_URL: ${{ github.event.release.html_url || '' }}
MANUAL_TEXT: ${{ inputs.tweet_text || '' }}
run: |
set -euo pipefail
if [ -n "$MANUAL_TEXT" ]; then
TWEET="$MANUAL_TEXT"
else
# Use a wider range — find the previous stable tag to capture all
# contributors across the full release cycle, not just one beta bump
PREV_TAG=$(git tag --sort=-creatordate \
| grep -v "^${RELEASE_TAG}$" \
| grep -vE '\-beta\.' \
| head -1 || echo "")
if [ -z "$PREV_TAG" ]; then
RANGE="HEAD"
else
RANGE="${PREV_TAG}..${RELEASE_TAG}"
fi
# Extract features only — no bug fixes, keep it clean and concise
FEATURES=$(git log "$RANGE" --pretty=format:"%s" --no-merges \
| grep -iE '^feat(\(|:)' \
| sed 's/^feat(\([^)]*\)): /\1: /' \
| sed 's/^feat: //' \
| sed 's/ (#[0-9]*)$//' \
| sort -uf \
| head -4 \
| while IFS= read -r line; do echo "🚀 ${line}"; done || true)
if [ -z "$FEATURES" ]; then
FEATURES="🚀 Incremental improvements and polish"
fi
# Collect ALL unique contributors: git authors + Co-Authored-By
# Filter out bots and service accounts
GIT_AUTHORS=$(git log "$RANGE" --pretty=format:"%an" --no-merges | sort -uf || true)
CO_AUTHORS=$(git log "$RANGE" --pretty=format:"%b" --no-merges \
| grep -ioE 'Co-Authored-By: *[^<]+' \
| sed 's/Co-Authored-By: *//i' \
| sed 's/ *$//' \
| sort -uf || true)
ALL_NAMES=$(printf "%s\n%s" "$GIT_AUTHORS" "$CO_AUTHORS" \
| sort -uf \
| grep -v '^$' \
| grep -viE '\[bot\]$|^dependabot|^github-actions|^copilot|^ZeroClaw Bot|^ZeroClaw Runner|^ZeroClaw Agent|^blacksmith' \
|| true)
TOTAL_COUNT=$(echo "$ALL_NAMES" | grep -c . || true)  # grep -c prints the 0 itself; || true only guards set -e
# Show up to 6 names, then "+ N more" if there are extras
SHOWN=$(echo "$ALL_NAMES" | head -6 | paste -sd, - | sed 's/,/, /g')  # paste cycles multi-char -d lists, so join with "," then widen
if [ "$TOTAL_COUNT" -gt 6 ]; then
EXTRA=$((TOTAL_COUNT - 6))
CONTRIBUTORS="${SHOWN} + ${EXTRA} more"
else
CONTRIBUTORS="$SHOWN"
fi
# Build the tweet — punchy, features-first, all contributors credited
TWEET=$(printf "🦀 ZeroClaw %s\n\n%s\n\n🙌 Contributors: %s\n\n🔗 %s" \
"$RELEASE_TAG" "$FEATURES" "$CONTRIBUTORS" "$RELEASE_URL")
fi
# Append release URL if not already present and we have one
if [ -n "$RELEASE_URL" ] && ! echo "$TWEET" | grep -q "$RELEASE_URL"; then
TWEET=$(printf "%s\n\n%s" "$TWEET" "$RELEASE_URL")
fi
# Hard-truncate to 280 chars if needed (the tail, including the URL, is what gets cut)
if [ ${#TWEET} -gt 280 ]; then
TWEET="${TWEET:0:277}..."
fi
echo "--- Tweet preview ---"
echo "$TWEET"
echo "--- ${#TWEET} chars ---"
{
echo "text<<TWEET_EOF"
echo "$TWEET"
echo "TWEET_EOF"
} >> "$GITHUB_OUTPUT"
- name: Post to X
shell: bash
env:
TWITTER_CONSUMER_KEY: ${{ secrets.TWITTER_CONSUMER_API_KEY }}
TWITTER_CONSUMER_SECRET: ${{ secrets.TWITTER_CONSUMER_API_SECRET_KEY }}
TWITTER_ACCESS_TOKEN: ${{ secrets.TWITTER_ACCESS_TOKEN }}
TWITTER_ACCESS_TOKEN_SECRET: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}
TWEET_TEXT: ${{ steps.tweet.outputs.text }}
IMAGE_URL: ${{ inputs.image_url || '' }}
run: |
set -euo pipefail
# Skip if Twitter secrets are not configured
if [ -z "$TWITTER_CONSUMER_KEY" ] || [ -z "$TWITTER_ACCESS_TOKEN" ]; then
echo "::warning::Twitter secrets not configured — skipping tweet"
exit 0
fi
pip install requests requests-oauthlib --quiet
python3 - <<'PYEOF'
import os, sys, time
from requests_oauthlib import OAuth1Session
consumer_key = os.environ["TWITTER_CONSUMER_KEY"]
consumer_secret = os.environ["TWITTER_CONSUMER_SECRET"]
access_token = os.environ["TWITTER_ACCESS_TOKEN"]
access_token_secret = os.environ["TWITTER_ACCESS_TOKEN_SECRET"]
tweet_text = os.environ["TWEET_TEXT"]
image_url = os.environ.get("IMAGE_URL", "")
oauth = OAuth1Session(
consumer_key,
client_secret=consumer_secret,
resource_owner_key=access_token,
resource_owner_secret=access_token_secret,
)
media_id = None
# Upload image if provided
if image_url:
import requests
print(f"Downloading image: {image_url}")
img_resp = requests.get(image_url, timeout=30)
img_resp.raise_for_status()
content_type = img_resp.headers.get("content-type", "image/png")
# X media upload (v1.1 chunked INIT/APPEND/FINALIZE)
init_resp = oauth.post(
"https://upload.twitter.com/1.1/media/upload.json",
data={
"command": "INIT",
"total_bytes": len(img_resp.content),
"media_type": content_type,
},
)
if init_resp.status_code != 202:
print(f"Media INIT failed: {init_resp.status_code} {init_resp.text}", file=sys.stderr)
sys.exit(1)
media_id = init_resp.json()["media_id_string"]
append_resp = oauth.post(
"https://upload.twitter.com/1.1/media/upload.json",
data={"command": "APPEND", "media_id": media_id, "segment_index": 0},
files={"media_data": img_resp.content},
)
if append_resp.status_code not in (200, 204):
print(f"Media APPEND failed: {append_resp.status_code} {append_resp.text}", file=sys.stderr)
sys.exit(1)
fin_resp = oauth.post(
"https://upload.twitter.com/1.1/media/upload.json",
data={"command": "FINALIZE", "media_id": media_id},
)
if fin_resp.status_code not in (200, 201):
print(f"Media FINALIZE failed: {fin_resp.status_code} {fin_resp.text}", file=sys.stderr)
sys.exit(1)
# Wait for processing if needed
state = fin_resp.json().get("processing_info", {}).get("state")
while state == "pending" or state == "in_progress":
wait = fin_resp.json().get("processing_info", {}).get("check_after_secs", 2)
time.sleep(wait)
status_resp = oauth.get(
"https://upload.twitter.com/1.1/media/upload.json",
params={"command": "STATUS", "media_id": media_id},
)
state = status_resp.json().get("processing_info", {}).get("state")
fin_resp = status_resp
print(f"Image uploaded: media_id={media_id}")
# Post tweet
payload = {"text": tweet_text}
if media_id:
payload["media"] = {"media_ids": [media_id]}
resp = oauth.post("https://api.x.com/2/tweets", json=payload)
if resp.status_code == 201:
data = resp.json()
tweet_id = data["data"]["id"]
print(f"Tweet posted: https://x.com/zeroclawlabs/status/{tweet_id}")
else:
print(f"Failed to post tweet: {resp.status_code}", file=sys.stderr)
print(resp.text, file=sys.stderr)
sys.exit(1)
PYEOF
@@ -1,64 +0,0 @@
name: Workflow Sanity
on:
pull_request:
paths:
- ".github/workflows/**"
- ".github/*.yml"
- ".github/*.yaml"
push:
paths:
- ".github/workflows/**"
- ".github/*.yml"
- ".github/*.yaml"
concurrency:
group: workflow-sanity-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
jobs:
no-tabs:
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
steps:
- name: Checkout
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Fail on tabs in workflow files
shell: bash
run: |
set -euo pipefail
python - <<'PY'
from __future__ import annotations
import pathlib
import sys
root = pathlib.Path(".github/workflows")
bad: list[str] = []
for path in sorted(root.rglob("*.yml")):
if b"\t" in path.read_bytes():
bad.append(str(path))
for path in sorted(root.rglob("*.yaml")):
if b"\t" in path.read_bytes():
bad.append(str(path))
if bad:
print("Tabs found in workflow file(s):")
for path in bad:
print(f"- {path}")
sys.exit(1)
PY
actionlint:
runs-on: blacksmith-2vcpu-ubuntu-2404
timeout-minutes: 10
steps:
- name: Checkout
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Lint GitHub workflows
uses: rhysd/actionlint@393031adb9afb225ee52ae2ccd7a5af5525e03e8 # v1.7.11
@@ -1,5 +1,6 @@
/target
firmware/*/target
web/dist/
*.db
*.db-journal
.DS_Store
@@ -29,3 +30,17 @@ venv/
*.pem
credentials.json
.worktrees/
.zeroclaw/*
# Skill eval workspaces (test outputs, transcripts, grading)
.claude/skills/*-workspace/
# Local state backups
.local-state-backups/
*.local-state-backup/
# Coverage artifacts
lcov.info
# IDE files
.idea
@@ -0,0 +1,14 @@
{
"recommendations": [
"rust-lang.rust-analyzer",
"vadimcn.vscode-lldb",
"serayuzgur.crates",
"bungcip.better-toml",
"usernamehw.errorlens",
"eamodio.gitlens",
"tamasfe.even-better-toml",
"dbaeumer.vscode-eslint",
"oderwat.indent-rainbow",
"ryanluker.vscode-coverage-gutters"
]
}
+73
View File
@@ -0,0 +1,73 @@
{
"version": "0.2.0",
"inputs": [
{
"id": "testName",
"description": "Exact test name to debug (e.g. tests::my_test)",
"type": "promptString",
"default": ""
}
],
"configurations": [
// Runtime
{
"type": "lldb",
"request": "launch",
"name": "Debug: Agent",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["agent"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
{
"type": "lldb",
"request": "launch",
"name": "Debug: Gateway",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["gateway"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
{
"type": "lldb",
"request": "launch",
"name": "Debug: Daemon",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["daemon"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
{
"type": "lldb",
"request": "launch",
"name": "Debug: Status",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["status"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
{
"type": "lldb",
"request": "launch",
"name": "Debug: Onboard",
"program": "${workspaceFolder}/target/debug/zeroclaw",
"args": ["onboard"],
"cwd": "${workspaceFolder}",
"preLaunchTask": "Build: Debug"
},
// Test
{
"type": "lldb",
"request": "launch",
"name": "Debug: Test (by name)",
"cargo": {
"args": ["test", "--no-run", "--lib", "--"],
"filter": {
"kind": "lib"
}
},
"args": ["--exact", "${input:testName}", "--nocapture"],
"cwd": "${workspaceFolder}"
}
]
}
+22
View File
@@ -0,0 +1,22 @@
{
"git.autofetch": true,
"git.autofetchPeriod": 90,
"search.exclude": {
"**/target": true
},
"files.watcherExclude": {
"**/target/**": true
},
"[rust]": {
"editor.defaultFormatter": "rust-lang.rust-analyzer"
},
"editor.formatOnSave": true,
"editor.formatOnPaste": true,
"files.autoSave": "afterDelay",
"files.autoSaveDelay": 1000,
"rust-analyzer.check.command": "clippy",
"rust-analyzer.check.extraArgs": ["--all-targets", "--", "-D", "warnings"],
"window.title": "${activeRepositoryBranchName}",
"coverage-gutters.coverageFileNames": ["lcov.info"],
"git.postCommitCommand": "push"
}
+133
View File
@@ -0,0 +1,133 @@
{
"version": "2.0.0",
"inputs": [
{
"id": "testFilter",
"description": "Test name or filter pattern",
"type": "promptString",
"default": ""
}
],
"tasks": [
// Build
{
"label": "Build: Debug",
"type": "shell",
"command": "cargo",
"args": ["build"],
"group": {
"kind": "build",
"isDefault": true
},
"problemMatcher": ["$rustc"]
},
{
"label": "Build: Release",
"type": "shell",
"command": "cargo",
"args": ["build", "--release"],
"problemMatcher": ["$rustc"]
},
{
"label": "Build: Check (fast)",
"type": "shell",
"command": "cargo",
"args": ["check", "--all-targets"],
"problemMatcher": ["$rustc"]
},
// Lint
{
"label": "Lint: Clippy",
"type": "shell",
"command": "cargo",
"args": ["clippy", "--all-targets", "--", "-D", "warnings"],
"problemMatcher": ["$rustc"]
},
{
"label": "Lint: Format Check",
"type": "shell",
"command": "cargo",
"args": ["fmt", "--all", "--", "--check"],
"problemMatcher": []
},
{
"label": "Lint: Format Fix",
"type": "shell",
"command": "cargo",
"args": ["fmt", "--all"],
"problemMatcher": []
},
// Test
{
"label": "Test: All",
"type": "shell",
"command": "cargo nextest --version >/dev/null 2>&1 || cargo install cargo-nextest && cargo nextest run",
"group": {
"kind": "test",
"isDefault": true
},
"problemMatcher": ["$rustc"]
},
{
"label": "Test: Filtered",
"type": "shell",
"command": "cargo nextest --version >/dev/null 2>&1 || cargo install cargo-nextest && cargo nextest run -E 'test(${input:testFilter})'",
"problemMatcher": ["$rustc"]
},
{
"label": "Test: Coverage Report",
"type": "shell",
"command": "cargo llvm-cov --version >/dev/null 2>&1 || cargo install cargo-llvm-cov && cargo llvm-cov --lcov --output-path lcov.info",
"problemMatcher": []
},
{
"label": "Test: Benchmarks",
"type": "shell",
"command": "cargo",
"args": ["bench"],
"problemMatcher": []
},
// Security
{
"label": "Security: Audit",
"type": "shell",
"command": "cargo audit --version >/dev/null 2>&1 || cargo install cargo-audit && cargo audit",
"problemMatcher": []
},
{
"label": "Security: Deny (licenses + sources)",
"type": "shell",
"command": "cargo deny --version >/dev/null 2>&1 || cargo install cargo-deny && cargo deny check licenses sources",
"problemMatcher": []
},
// CI (Docker)
{
"label": "CI: All (Docker)",
"type": "shell",
"command": "./dev/ci.sh",
"args": ["all"],
"problemMatcher": []
},
{
"label": "CI: Lint (Docker)",
"type": "shell",
"command": "./dev/ci.sh",
"args": ["lint"],
"problemMatcher": []
},
{
"label": "CI: Test (Docker)",
"type": "shell",
"command": "./dev/ci.sh",
"args": ["test"],
"problemMatcher": []
},
{
"label": "CI: Security (Docker)",
"type": "shell",
"command": "./dev/ci.sh",
"args": ["security"],
"problemMatcher": []
}
]
}
@@ -1,484 +0,0 @@
# AGENTS.md — ZeroClaw Agent Engineering Protocol
This file defines the default working protocol for coding agents in this repository.
Scope: entire repository.
## 1) Project Snapshot (Read First)
ZeroClaw is a Rust-first autonomous agent runtime optimized for:
- high performance
- high efficiency
- high stability
- high extensibility
- high sustainability
- high security
Core architecture is trait-driven and modular. Most extension work should be done by implementing traits and registering in factory modules.
Key extension points:
- `src/providers/traits.rs` (`Provider`)
- `src/channels/traits.rs` (`Channel`)
- `src/tools/traits.rs` (`Tool`)
- `src/memory/traits.rs` (`Memory`)
- `src/observability/traits.rs` (`Observer`)
- `src/runtime/traits.rs` (`RuntimeAdapter`)
- `src/peripherals/traits.rs` (`Peripheral`) — hardware boards (STM32, RPi GPIO)
## 2) Deep Architecture Observations (Why This Protocol Exists)
These codebase realities should drive every design decision:
1. **Trait + factory architecture is the stability backbone**
- Extension points are intentionally explicit and swappable.
- Most features should be added via trait implementation + factory registration, not cross-cutting rewrites.
2. **Security-critical surfaces are first-class and internet-adjacent**
- `src/gateway/`, `src/security/`, `src/tools/`, `src/runtime/` carry high blast radius.
- Defaults already lean secure-by-default (pairing, bind safety, limits, secret handling); keep it that way.
3. **Performance and binary size are product goals, not nice-to-have**
- `Cargo.toml` release profile and dependency choices optimize for size and determinism.
- Convenience dependencies and broad abstractions can silently regress these goals.
4. **Config and runtime contracts are user-facing API**
- `src/config/schema.rs` and CLI commands are effectively public interfaces.
- Backward compatibility and explicit migration matter.
5. **The project now runs in high-concurrency collaboration mode**
- CI + docs governance + label routing are part of the product delivery system.
- PR throughput is a design constraint, not just a maintainer inconvenience.
## 3) Engineering Principles (Normative)
These principles are mandatory by default. They are not slogans; they are implementation constraints.
### 3.1 KISS (Keep It Simple, Stupid)
**Why here:** Runtime + security behavior must stay auditable under pressure.
Required:
- Prefer straightforward control flow over clever meta-programming.
- Prefer explicit match branches and typed structs over hidden dynamic behavior.
- Keep error paths obvious and localized.
### 3.2 YAGNI (You Aren't Gonna Need It)
**Why here:** Premature features increase attack surface and maintenance burden.
Required:
- Do not add new config keys, trait methods, feature flags, or workflow branches without a concrete accepted use case.
- Do not introduce speculative “future-proof” abstractions without at least one current caller.
- Keep unsupported paths explicit (error out) rather than adding partial fake support.
### 3.3 DRY + Rule of Three
**Why here:** Naive DRY can create brittle shared abstractions across providers/channels/tools.
Required:
- Duplicate small, local logic when it preserves clarity.
- Extract shared utilities only after repeated, stable patterns (rule-of-three).
- When extracting, preserve module boundaries and avoid hidden coupling.
### 3.4 SRP + ISP (Single Responsibility + Interface Segregation)
**Why here:** Trait-driven architecture already encodes subsystem boundaries.
Required:
- Keep each module focused on one concern.
- Extend behavior by implementing existing narrow traits whenever possible.
- Avoid fat interfaces and “god modules” that mix policy + transport + storage.
### 3.5 Fail Fast + Explicit Errors
**Why here:** Silent fallback in agent runtimes can create unsafe or costly behavior.
Required:
- Prefer explicit `bail!`/errors for unsupported or unsafe states.
- Never silently broaden permissions/capabilities.
- Document fallback behavior when fallback is intentional and safe.
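A minimal sketch of the fail-fast shape these rules imply, assuming `anyhow` provides the `bail!` macro referenced above (the adapter names are illustrative):

```rust
use anyhow::{bail, Result};

/// Hypothetical dispatch: unsupported states error out explicitly
/// instead of falling back to a broader capability.
fn resolve_runtime_adapter(kind: &str) -> Result<&'static str> {
    match kind {
        // Explicit match branches keep the supported surface auditable.
        "native" => Ok("native"),
        // Fail fast on anything unsupported; never silently broaden scope.
        other => bail!("unsupported runtime adapter: {other}"),
    }
}
```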
### 3.6 Secure by Default + Least Privilege
**Why here:** Gateway/tools/runtime can execute actions with real-world side effects.
Required:
- Deny-by-default for access and exposure boundaries.
- Never log secrets, raw tokens, or sensitive payloads.
- Keep network/filesystem/shell scope as narrow as possible unless explicitly justified.
### 3.7 Determinism + Reproducibility
**Why here:** Reliable CI and low-latency triage depend on deterministic behavior.
Required:
- Prefer reproducible commands and locked dependency behavior in CI-sensitive paths.
- Keep tests deterministic (no flaky timing/network dependence without guardrails).
- Ensure local validation commands map to CI expectations.
### 3.8 Reversibility + Rollback-First Thinking
**Why here:** Fast recovery is mandatory under high PR volume.
Required:
- Keep changes easy to revert (small scope, clear blast radius).
- For risky changes, define rollback path before merge.
- Avoid mixed mega-patches that block safe rollback.
## 4) Repository Map (High-Level)
- `src/main.rs` — CLI entrypoint and command routing
- `src/lib.rs` — module exports and shared command enums
- `src/config/` — schema + config loading/merging
- `src/agent/` — orchestration loop
- `src/gateway/` — webhook/gateway server
- `src/security/` — policy, pairing, secret store
- `src/memory/` — markdown/sqlite memory backends + embeddings/vector merge
- `src/providers/` — model providers and resilient wrapper
- `src/channels/` — Telegram/Discord/Slack/etc channels
- `src/tools/` — tool execution surface (shell, file, memory, browser)
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO); see `docs/hardware-peripherals-design.md`
- `src/runtime/` — runtime adapters (currently native)
- `docs/` — task-oriented documentation system (hubs, unified TOC, references, operations, security proposals, multilingual guides)
- `.github/` — CI, templates, automation workflows
## 4.1 Documentation System Contract (Required)
Treat documentation as a first-class product surface, not a post-merge artifact.
Canonical entry points:
- root READMEs: `README.md`, `README.zh-CN.md`, `README.ja.md`, `README.ru.md`, `README.fr.md`, `README.vi.md`
- docs hubs: `docs/README.md`, `docs/README.zh-CN.md`, `docs/README.ja.md`, `docs/README.ru.md`, `docs/README.fr.md`, `docs/i18n/vi/README.md`
- unified TOC: `docs/SUMMARY.md`
Supported locales (current contract):
- `en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`
Collection indexes (category navigation):
- `docs/getting-started/README.md`
- `docs/reference/README.md`
- `docs/operations/README.md`
- `docs/security/README.md`
- `docs/hardware/README.md`
- `docs/contributing/README.md`
- `docs/project/README.md`
Runtime-contract references (must track behavior changes):
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`
Required docs governance rules:
- Keep README/hub top navigation and quick routes intuitive and non-duplicative.
- Keep entry-point parity across all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when changing navigation architecture.
- If a change touches docs IA, runtime-contract references, or user-facing wording in shared docs, perform i18n follow-through for currently supported locales in the same PR:
- Update locale navigation links (`README*`, `docs/README*`, `docs/SUMMARY.md`).
- Update localized runtime-contract docs where equivalents exist (at minimum `commands-reference`, `config-reference`, `troubleshooting` for `fr` and `vi`).
- For Vietnamese, treat `docs/i18n/vi/**` as canonical. Keep `docs/*.<locale>.md` compatibility shims aligned if present.
- Keep proposal/roadmap docs explicitly labeled; avoid mixing proposal text into runtime-contract docs.
- Keep project snapshots date-stamped and immutable once superseded by a newer date.
## 5) Risk Tiers by Path (Review Depth Contract)
Use these tiers when deciding validation depth and review rigor.
- **Low risk**: docs/chore/tests-only changes
- **Medium risk**: most `src/**` behavior changes without boundary/security impact
- **High risk**: `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**`, access-control boundaries
When uncertain, classify as higher risk.
## 6) Agent Workflow (Required)
1. **Read before write**
- Inspect existing module, factory wiring, and adjacent tests before editing.
2. **Define scope boundary**
- One concern per PR; avoid mixed feature+refactor+infra patches.
3. **Implement minimal patch**
- Apply KISS/YAGNI/DRY rule-of-three explicitly.
4. **Validate by risk tier**
- Docs-only: lightweight checks.
- Code/risky changes: full relevant checks and focused scenarios.
5. **Document impact**
- Update docs/PR notes for behavior, risk, side effects, and rollback.
- If CLI/config/provider/channel behavior changed, update corresponding runtime-contract references.
- If docs entry points changed, keep all supported locale README/docs-hub navigation aligned (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`).
6. **Respect queue hygiene**
- If stacked PR: declare `Depends on #...`.
- If replacing old PR: declare `Supersedes #...`.
### 6.1 Branch / Commit / PR Flow (Required)
All contributors (human or agent) must follow the same collaboration flow:
- Create and work from a non-`main` branch.
- Commit changes to that branch with clear, scoped commit messages.
- Open a PR to `dev`; do not push directly to `dev` or `main`.
- `main` is reserved for release promotion PRs from `dev`.
- Wait for required checks and review outcomes before merging.
- Merge via PR controls (squash/rebase/merge as repository policy allows).
- Branch deletion after merge is optional; long-lived branches are allowed when intentionally maintained.
### 6.2 Worktree Workflow (Required for Multi-Track Agent Work)
Use Git worktrees to isolate concurrent agent/human tracks safely and predictably:
- Use one worktree per active branch/PR stream to avoid cross-task contamination.
- Keep each worktree on a single branch; do not mix unrelated edits in one worktree.
- Run validation commands inside the corresponding worktree before commit/PR.
- Name worktrees clearly by scope (for example: `wt/ci-hardening`, `wt/provider-fix`) and remove stale worktrees when no longer needed.
- PR checkpoint rules from section 6.1 still apply to worktree-based development.
### 6.3 Code Naming Contract (Required)
Apply these naming rules for all code changes unless a subsystem has a stronger existing pattern.
- Use Rust standard casing consistently: modules/files `snake_case`, types/traits/enums `PascalCase`, functions/variables `snake_case`, constants/statics `SCREAMING_SNAKE_CASE`.
- Name types and modules by domain role, not implementation detail (for example `DiscordChannel`, `SecurityPolicy`, `MemoryStore` over vague names like `Manager`/`Helper`).
- Keep trait implementer naming explicit and predictable: `<ProviderName>Provider`, `<ChannelName>Channel`, `<ToolName>Tool`, `<BackendName>Memory`.
- Keep factory registration keys stable, lowercase, and user-facing (for example `"openai"`, `"discord"`, `"shell"`), and avoid alias sprawl without migration need.
- Name tests by behavior/outcome (`<subject>_<expected_behavior>`) and keep fixture identifiers neutral/project-scoped.
- If identity-like naming is required in tests/examples, use ZeroClaw-native labels only (`ZeroClawAgent`, `zeroclaw_user`, `zeroclaw_node`).
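A minimal sketch of these rules in practice; the types, key, and fixture below are hypothetical, not actual ZeroClaw source:
```rust
// Hypothetical illustration only; a file like this would live at
// src/channels/discord.rs (snake_case module path).

/// Domain-role type name in PascalCase: `<ChannelName>Channel`.
pub struct DiscordChannel {
    /// snake_case field; neutral, project-scoped data only.
    pub allowed_users: Vec<String>,
}

/// Factory registration key: stable, lowercase, user-facing.
pub const CHANNEL_KEY: &str = "discord";

#[cfg(test)]
mod tests {
    // Test named by behavior: `<subject>_<expected_behavior>`.
    #[test]
    fn allowlist_rejects_unlisted_user() {
        let allowlist = vec!["zeroclaw_user".to_string()];
        assert!(!allowlist.contains(&"user_b".to_string()));
    }
}
```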
### 6.4 Architecture Boundary Contract (Required)
Use these rules to keep the trait/factory architecture stable under growth.
- Extend capabilities by adding trait implementations + factory wiring first; avoid cross-module rewrites for isolated features.
- Keep dependency direction inward to contracts: concrete integrations depend on trait/config/util layers, not on other concrete integrations.
- Avoid creating cross-subsystem coupling (for example provider code importing channel internals, tool code mutating gateway policy directly).
- Keep module responsibilities single-purpose: orchestration in `agent/`, transport in `channels/`, model I/O in `providers/`, policy in `security/`, execution in `tools/`.
- Introduce new shared abstractions only after repeated use (rule-of-three), with at least one real caller in current scope.
- For config/schema changes, treat keys as public contract: document defaults, compatibility impact, and migration/rollback path.
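A compilable sketch of the dependency-direction rule. The inline `mod` blocks stand in for the real `src/` subsystem directories and are assumptions for illustration only:
```rust
// Illustrative module layout; `mod` names mirror subsystem directories.
mod traits {
    /// Contract layer: the narrow interface concrete integrations depend on.
    pub trait Provider {
        fn name(&self) -> &'static str;
    }
}

mod providers {
    // Dependency direction points inward: concrete -> contract layer only.
    use super::traits::Provider;

    pub struct ExampleProvider;

    impl Provider for ExampleProvider {
        fn name(&self) -> &'static str {
            "example"
        }
    }
    // Not allowed here: importing channel internals or mutating gateway
    // policy directly; that would create cross-subsystem coupling.
}

mod channels {
    // Transport concerns live here; nothing reaches into providers/.
}

fn main() {
    use traits::Provider;
    println!("{}", providers::ExampleProvider.name());
}
```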
## 7) Change Playbooks
### 7.1 Adding a Provider
- Implement `Provider` in `src/providers/`.
- Register in `src/providers/mod.rs` factory.
- Add focused tests for factory wiring and error paths.
- Avoid leaking provider-specific behavior into shared orchestration code; see the sketch below.
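A minimal sketch of this playbook, assuming a simplified synchronous trait (the real `Provider` trait in `src/providers/traits.rs` is async and richer; `ExampleProvider` and the method shape are hypothetical). `anyhow` is already a project dependency:
```rust
use anyhow::{bail, Result};

/// Assumed synchronous stand-in; the real trait is async.
pub trait Provider {
    fn complete(&self, prompt: &str) -> Result<String>;
}

pub struct ExampleProvider;

impl Provider for ExampleProvider {
    fn complete(&self, prompt: &str) -> Result<String> {
        Ok(format!("echo: {prompt}"))
    }
}

/// Factory registration: stable lowercase key, fail-fast on unknown names.
pub fn create_provider(kind: &str) -> Result<Box<dyn Provider>> {
    match kind {
        "example" => Ok(Box::new(ExampleProvider)),
        other => bail!("unknown provider: {other}"),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn factory_wires_known_provider() {
        assert!(create_provider("example").is_ok());
    }

    #[test]
    fn factory_rejects_unknown_provider() {
        assert!(create_provider("no_such_provider").is_err());
    }
}
```
The error-path test encodes the fail-fast rule: unknown factory keys bail instead of silently falling back.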
### 7.2 Adding a Channel
- Implement `Channel` in `src/channels/`.
- Keep `send`, `listen`, `health_check`, typing semantics consistent.
- Cover auth/allowlist/health behavior with tests.
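A hedged sketch of the channel surface. The `send`/`listen`/`health_check` trio mirrors the bullets above; the synchronous signatures and `IncomingMessage` type are assumptions for illustration:
```rust
use anyhow::Result;

/// Assumed inbound message shape, for illustration.
pub struct IncomingMessage {
    pub sender: String,
    pub body: String,
}

/// Assumed synchronous stand-in for the async channel trait.
pub trait Channel {
    /// Push a reply out through the transport.
    fn send(&self, to: &str, body: &str) -> Result<()>;
    /// Pull the next inbound message, if any.
    fn listen(&mut self) -> Result<Option<IncomingMessage>>;
    /// Cheap liveness probe for health/status reporting.
    fn health_check(&self) -> bool;
}

pub struct ExampleChannel {
    allowed_senders: Vec<String>,
}

impl ExampleChannel {
    fn is_allowed(&self, sender: &str) -> bool {
        self.allowed_senders.iter().any(|s| s == sender)
    }
}

impl Channel for ExampleChannel {
    fn send(&self, _to: &str, _body: &str) -> Result<()> {
        Ok(()) // real transport I/O elided
    }

    fn listen(&mut self) -> Result<Option<IncomingMessage>> {
        Ok(None) // real channels poll or subscribe here
    }

    fn health_check(&self) -> bool {
        true
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn channel_rejects_sender_outside_allowlist() {
        let ch = ExampleChannel { allowed_senders: vec!["zeroclaw_user".into()] };
        assert!(ch.is_allowed("zeroclaw_user"));
        assert!(!ch.is_allowed("user_b"));
    }
}
```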
### 7.3 Adding a Tool
- Implement `Tool` in `src/tools/` with strict parameter schema.
- Validate and sanitize all inputs.
- Return structured `ToolResult`; avoid panics in runtime path.
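A minimal sketch, assuming a simplified synchronous tool shape; the real trait carries a JSON parameter schema, and `EchoTool` plus the `ToolResult` fields shown here are illustrative:
```rust
use anyhow::{bail, Result};

/// Illustrative result shape; field names are assumptions.
pub struct ToolResult {
    pub success: bool,
    pub output: String,
}

/// Assumed synchronous stand-in for the tool trait.
pub trait Tool {
    fn name(&self) -> &'static str;
    fn execute(&self, input: &str) -> Result<ToolResult>;
}

pub struct EchoTool;

impl Tool for EchoTool {
    fn name(&self) -> &'static str {
        "echo"
    }

    fn execute(&self, input: &str) -> Result<ToolResult> {
        // Validate and sanitize inputs; reject rather than guess, and
        // return structured errors instead of panicking in the runtime path.
        if input.is_empty() || input.len() > 4096 {
            bail!("echo: input must be between 1 and 4096 bytes");
        }
        Ok(ToolResult {
            success: true,
            output: input.trim().to_string(),
        })
    }
}
```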
### 7.4 Adding a Peripheral
- Implement `Peripheral` in `src/peripherals/`.
- Peripherals expose `tools()` — each tool delegates to the hardware (GPIO, sensors, etc.).
- Register board type in config schema if needed.
- See `docs/hardware-peripherals-design.md` for protocol and firmware notes.
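A hedged sketch of the peripheral shape. Only `Peripheral` exposing `tools()` comes from this playbook; the board type, tool, and signatures are hypothetical:
```rust
/// Minimal stand-in for the tool contract (see 7.3).
pub trait Tool {
    fn name(&self) -> &'static str;
}

/// Peripherals expose hardware-backed tools; `tools()` is the documented
/// shape, the rest is illustrative.
pub trait Peripheral {
    fn tools(&self) -> Vec<Box<dyn Tool>>;
}

/// Hypothetical board type; a real one would be registered in the config
/// schema and delegate to GPIO/sensor drivers.
pub struct ExampleBoard;

struct GpioReadTool;

impl Tool for GpioReadTool {
    fn name(&self) -> &'static str {
        "gpio_read"
    }
}

impl Peripheral for ExampleBoard {
    fn tools(&self) -> Vec<Box<dyn Tool>> {
        vec![Box::new(GpioReadTool)]
    }
}
```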
### 7.5 Security / Runtime / Gateway Changes
- Include threat/risk notes and rollback strategy.
- Add/update tests or validation evidence for failure modes and boundaries.
- Keep observability useful but non-sensitive.
- For `.github/workflows/**` changes, include Actions allowlist impact in PR notes and update `docs/actions-source-policy.md` when sources change.
### 7.6 Docs System / README / IA Changes
- Treat docs navigation as product UX: preserve clear pathing from README -> docs hub -> SUMMARY -> category index.
- Keep top-level nav concise; avoid duplicative links across adjacent nav blocks.
- When runtime surfaces change, update related references (`commands/providers/channels/config/runbook/troubleshooting`).
- Keep multilingual entry-point parity for all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when nav or key wording changes.
- When shared docs wording changes, sync corresponding localized docs for supported locales in the same PR (or explicitly document deferral and follow-up PR).
- For docs snapshots, add new date-stamped files for new sprints rather than rewriting historical context.
## 8) Validation Matrix
Default local checks for code changes:
```bash
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test
```
Preferred local pre-PR validation path (recommended, not required):
```bash
./dev/ci.sh all
```
Notes:
- Local Docker-based CI is strongly recommended when Docker is available.
- Contributors are not blocked from opening a PR if local Docker CI is unavailable; in that case run the most relevant native checks and document what was run.
Additional expectations by change type:
- **Docs/template-only**:
- run markdown lint and link-integrity checks
- if touching README/docs-hub/SUMMARY/collection indexes, verify EN/ZH/JA/RU/FR/VI navigation parity
- if touching bootstrap docs/scripts, run `bash -n bootstrap.sh scripts/bootstrap.sh scripts/install.sh`
- **Workflow changes**: validate YAML syntax; run workflow lint/sanity checks when available.
- **Security/runtime/gateway/tools**: include at least one boundary/failure-mode validation.
If full checks are impractical, run the most relevant subset and document what was skipped and why.
## 9) Collaboration and PR Discipline
- Follow `.github/pull_request_template.md` fully (including side effects / blast radius).
- Keep PR descriptions concrete: problem, change, non-goals, risk, rollback.
- Use conventional commit titles.
- Prefer small PRs (`size: XS/S/M`) when possible.
- Agent-assisted PRs are welcome, **but contributors remain accountable for understanding what their code will do**.
### 9.1 Privacy/Sensitive Data and Neutral Wording (Required)
Treat privacy and neutrality as merge gates, not best-effort guidelines.
- Never commit personal or sensitive data in code, docs, tests, fixtures, snapshots, logs, examples, or commit messages.
- Prohibited data includes (non-exhaustive): real names, personal emails, phone numbers, addresses, access tokens, API keys, credentials, IDs, and private URLs.
- Use neutral project-scoped placeholders (for example: `user_a`, `test_user`, `project_bot`, `example.com`) instead of real identity data.
- Test names/messages/fixtures must be impersonal and system-focused; avoid first-person or identity-specific language.
- If identity-like context is unavoidable, use ZeroClaw-scoped roles/labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`) and avoid real-world personas.
- Recommended identity-safe naming palette (use when identity-like context is required):
- actor labels: `ZeroClawAgent`, `ZeroClawOperator`, `ZeroClawMaintainer`, `zeroclaw_user`
- service/runtime labels: `zeroclaw_bot`, `zeroclaw_service`, `zeroclaw_runtime`, `zeroclaw_node`
- environment labels: `zeroclaw_project`, `zeroclaw_workspace`, `zeroclaw_channel`
- If reproducing external incidents, redact and anonymize all payloads before committing.
- Before push, review `git diff --cached` specifically for accidental sensitive strings and identity leakage.
### 9.2 Superseded-PR Attribution (Required)
When a PR supersedes another contributor's PR and carries forward substantive code or design decisions, preserve authorship explicitly.
- In the integrating commit message, add one `Co-authored-by: Name <email>` trailer per superseded contributor whose work is materially incorporated.
- Use a GitHub-recognized email (`<login@users.noreply.github.com>` or the contributor's verified commit email) so attribution is rendered correctly.
- Keep trailers on their own lines after a blank line at commit-message end; never encode them as escaped `\n` text.
- In the PR body, list superseded PR links and briefly state what was incorporated from each.
- If no actual code/design was incorporated (only inspiration), do not use `Co-authored-by`; give credit in PR notes instead.
### 9.3 Superseded-PR PR Template (Recommended)
When superseding multiple PRs, use a consistent title/body structure to reduce reviewer ambiguity.
- Recommended title format: `feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]`
- If this is docs/chore/meta only, keep the same supersede suffix and use the appropriate conventional-commit type.
- In the PR body, include the following template (fill placeholders, remove non-applicable lines):
```md
## Supersedes
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>
## Integrated Scope
- From #<pr_a>: <what was materially incorporated>
- From #<pr_b>: <what was materially incorporated>
- From #<pr_n>: <what was materially incorporated>
## Attribution
- Co-authored-by trailers added for materially incorporated contributors: Yes/No
- If No, explain why (for example: no direct code/design carry-over)
## Non-goals
- <explicitly list what was not carried over>
## Risk and Rollback
- Risk: <summary>
- Rollback: <revert commit/PR strategy>
```
### 9.4 Superseded-PR Commit Template (Recommended)
When a commit unifies or supersedes prior PR work, use a deterministic commit message layout so attribution is machine-parseable and reviewer-friendly.
- Keep one blank line between message sections, and exactly one blank line before trailer lines.
- Keep each trailer on its own line; do not wrap, indent, or encode as escaped `\n` text.
- Add one `Co-authored-by` trailer per materially incorporated contributor, using GitHub-recognized email.
- If no direct code/design is carried over, omit `Co-authored-by` and explain attribution in the PR body instead.
```text
feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]
<one-paragraph summary of integrated outcome>
Supersedes:
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>
Integrated scope:
- <subsystem_or_feature_a>: from #<pr_x>
- <subsystem_or_feature_b>: from #<pr_y>
Co-authored-by: <Name A> <login_a@users.noreply.github.com>
Co-authored-by: <Name B> <login_b@users.noreply.github.com>
```
Reference docs:
- `CONTRIBUTING.md`
- `docs/README.md`
- `docs/SUMMARY.md`
- `docs/docs-inventory.md`
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`
- `docs/pr-workflow.md`
- `docs/reviewer-playbook.md`
- `docs/ci-map.md`
- `docs/actions-source-policy.md`
## 10) Anti-Patterns (Do Not)
- Do not add heavy dependencies for minor convenience.
- Do not silently weaken security policy or access constraints.
- Do not add speculative config/feature flags “just in case”.
- Do not mix massive formatting-only changes with functional changes.
- Do not modify unrelated modules “while here”.
- Do not bypass failing checks without explicit explanation.
- Do not hide behavior-changing side effects in refactor commits.
- Do not include personal identity or sensitive information in test data, examples, docs, or commits.
## 11) Handoff Template (Agent -> Agent / Maintainer)
When handing off work, include:
1. What changed
2. What did not change
3. Validation run and results
4. Remaining risks / unknowns
5. Next recommended action
## 12) Vibe Coding Guardrails
When working in fast iterative mode:
- Keep each iteration reversible (small commits, clear rollback).
- Validate assumptions with code search before implementing.
- Prefer deterministic behavior over clever shortcuts.
- Do not “ship and hope” on security-sensitive paths.
- If uncertain, leave a concrete TODO with verification context, not a hidden guess.
Symlink
+1
@@ -0,0 +1 @@
CLAUDE.md
-66
@@ -1,67 +1 @@
# Changelog
All notable changes to ZeroClaw will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Security
- **Legacy XOR cipher migration**: The `enc:` prefix (XOR cipher) is now deprecated.
Secrets using this format will be automatically migrated to `enc2:` (ChaCha20-Poly1305 AEAD)
when decrypted via `decrypt_and_migrate()`. A `tracing::warn!` is emitted when legacy
values are encountered. The XOR cipher will be removed in a future release.
### Added
- `SecretStore::decrypt_and_migrate()` — Decrypts secrets and returns a migrated `enc2:`
value if the input used the legacy `enc:` format
- `SecretStore::needs_migration()` — Check if a value uses the legacy `enc:` format
- `SecretStore::is_secure_encrypted()` — Check if a value uses the secure `enc2:` format
- **Telegram mention_only mode** — New config option `mention_only` for Telegram channel.
When enabled, bot only responds to messages that @-mention the bot in group chats.
Direct messages always work regardless of this setting. Default: `false`.
### Deprecated
- `enc:` prefix for encrypted secrets — Use `enc2:` (ChaCha20-Poly1305) instead.
Legacy values are still decrypted for backward compatibility but should be migrated.
### Fixed
- **Gemini thinking model support** — Responses from thinking models (e.g. `gemini-3-pro-preview`)
are now handled correctly. The provider skips internal reasoning parts (`thought: true`) and
signature parts (`thoughtSignature`), extracting only the final answer text. Falls back to
thinking content when no non-thinking response is available.
- Updated default gateway port to `42617`.
- Removed all user-facing references to port `3000`.
- **Onboarding channel menu dispatch** now uses an enum-backed selector instead of hard-coded
numeric match arms, preventing duplicated pattern arms and related `unreachable pattern`
compiler warnings in `src/onboard/wizard.rs`.
- **OpenAI native tool spec parsing** now uses owned serializable/deserializable structs,
fixing a compile-time type mismatch when validating tool schemas before API calls.
## [0.1.0] - 2026-02-13
### Added
- **Core Architecture**: Trait-based pluggable system for Provider, Channel, Observer, RuntimeAdapter, Tool
- **Provider**: OpenRouter implementation (access Claude, GPT-4, Llama, Gemini via single API)
- **Channels**: CLI channel with interactive and single-message modes
- **Observability**: NoopObserver (zero overhead), LogObserver (tracing), MultiObserver (fan-out)
- **Security**: Workspace sandboxing, command allowlisting, path traversal blocking, autonomy levels (ReadOnly/Supervised/Full), rate limiting
- **Tools**: Shell (sandboxed), FileRead (path-checked), FileWrite (path-checked)
- **Memory (Brain)**: SQLite persistent backend (searchable, survives restarts), Markdown backend (plain files, human-readable)
- **Heartbeat Engine**: Periodic task execution from HEARTBEAT.md
- **Runtime**: Native adapter for Mac/Linux/Raspberry Pi
- **Config**: TOML-based configuration with sensible defaults
- **Onboarding**: Interactive CLI wizard with workspace scaffolding
- **CLI Commands**: agent, gateway, status, cron, channel, tools, onboard
- **CI/CD**: GitHub Actions with cross-platform builds (Linux, macOS Intel/ARM, Windows)
- **Tests**: 159 inline tests covering all modules and edge cases
- **Binary**: 3.1MB optimized release build (includes bundled SQLite)
### Security
- Path traversal attack prevention
- Command injection blocking
- Workspace escape prevention
- Forbidden system path protection (`/etc`, `/root`, `~/.ssh`)
[0.1.0]: https://github.com/theonlyhennygod/zeroclaw/releases/tag/v0.1.0
+41 -434
@@ -1,20 +1,26 @@
# CLAUDE.md — ZeroClaw Agent Engineering Protocol
# CLAUDE.md — ZeroClaw
This file defines the default working protocol for Claude agents in this repository.
Scope: entire repository.
## Commands
## 1) Project Snapshot (Read First)
```bash
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test
```
ZeroClaw is a Rust-first autonomous agent runtime optimized for:
Full pre-PR validation (recommended):
- high performance
- high efficiency
- high stability
- high extensibility
- high sustainability
- high security
```bash
./dev/ci.sh all
```
Core architecture is trait-driven and modular. Most extension work should be done by implementing traits and registering in factory modules.
Docs-only changes: run markdown lint and link-integrity checks. If touching bootstrap scripts: `bash -n install.sh`.
## Project Snapshot
ZeroClaw is a Rust-first autonomous agent runtime optimized for performance, efficiency, stability, extensibility, sustainability, and security.
Core architecture is trait-driven and modular. Extend by implementing traits and registering in factory modules.
Key extension points:
@@ -26,111 +32,7 @@ Key extension points:
- `src/runtime/traits.rs` (`RuntimeAdapter`)
- `src/peripherals/traits.rs` (`Peripheral`) — hardware boards (STM32, RPi GPIO)
## 2) Deep Architecture Observations (Why This Protocol Exists)
These codebase realities should drive every design decision:
1. **Trait + factory architecture is the stability backbone**
- Extension points are intentionally explicit and swappable.
- Most features should be added via trait implementation + factory registration, not cross-cutting rewrites.
2. **Security-critical surfaces are first-class and internet-adjacent**
- `src/gateway/`, `src/security/`, `src/tools/`, `src/runtime/` carry high blast radius.
- Defaults already lean secure-by-default (pairing, bind safety, limits, secret handling); keep it that way.
3. **Performance and binary size are product goals, not nice-to-have**
- `Cargo.toml` release profile and dependency choices optimize for size and determinism.
- Convenience dependencies and broad abstractions can silently regress these goals.
4. **Config and runtime contracts are user-facing API**
- `src/config/schema.rs` and CLI commands are effectively public interfaces.
- Backward compatibility and explicit migration matter.
5. **The project now runs in high-concurrency collaboration mode**
- CI + docs governance + label routing are part of the product delivery system.
- PR throughput is a design constraint; not just a maintainer inconvenience.
## 3) Engineering Principles (Normative)
These principles are mandatory by default. They are not slogans; they are implementation constraints.
### 3.1 KISS (Keep It Simple, Stupid)
**Why here:** Runtime + security behavior must stay auditable under pressure.
Required:
- Prefer straightforward control flow over clever meta-programming.
- Prefer explicit match branches and typed structs over hidden dynamic behavior.
- Keep error paths obvious and localized.
### 3.2 YAGNI (You Aren't Gonna Need It)
**Why here:** Premature features increase attack surface and maintenance burden.
Required:
- Do not add new config keys, trait methods, feature flags, or workflow branches without a concrete accepted use case.
- Do not introduce speculative “future-proof” abstractions without at least one current caller.
- Keep unsupported paths explicit (error out) rather than adding partial fake support.
### 3.3 DRY + Rule of Three
**Why here:** Naive DRY can create brittle shared abstractions across providers/channels/tools.
Required:
- Duplicate small, local logic when it preserves clarity.
- Extract shared utilities only after repeated, stable patterns (rule-of-three).
- When extracting, preserve module boundaries and avoid hidden coupling.
### 3.4 SRP + ISP (Single Responsibility + Interface Segregation)
**Why here:** Trait-driven architecture already encodes subsystem boundaries.
Required:
- Keep each module focused on one concern.
- Extend behavior by implementing existing narrow traits whenever possible.
- Avoid fat interfaces and “god modules” that mix policy + transport + storage.
### 3.5 Fail Fast + Explicit Errors
**Why here:** Silent fallback in agent runtimes can create unsafe or costly behavior.
Required:
- Prefer explicit `bail!`/errors for unsupported or unsafe states.
- Never silently broaden permissions/capabilities.
- Document fallback behavior when fallback is intentional and safe.
### 3.6 Secure by Default + Least Privilege
**Why here:** Gateway/tools/runtime can execute actions with real-world side effects.
Required:
- Deny-by-default for access and exposure boundaries.
- Never log secrets, raw tokens, or sensitive payloads.
- Keep network/filesystem/shell scope as narrow as possible unless explicitly justified.
### 3.7 Determinism + Reproducibility
**Why here:** Reliable CI and low-latency triage depend on deterministic behavior.
Required:
- Prefer reproducible commands and locked dependency behavior in CI-sensitive paths.
- Keep tests deterministic (no flaky timing/network dependence without guardrails).
- Ensure local validation commands map to CI expectations.
### 3.8 Reversibility + Rollback-First Thinking
**Why here:** Fast recovery is mandatory under high PR volume.
Required:
- Keep changes easy to revert (small scope, clear blast radius).
- For risky changes, define rollback path before merge.
- Avoid mixed mega-patches that block safe rollback.
## 4) Repository Map (High-Level)
## Repository Map
- `src/main.rs` — CLI entrypoint and command routing
- `src/lib.rs` — module exports and shared command enums
@@ -142,59 +44,12 @@ Required:
- `src/providers/` — model providers and resilient wrapper
- `src/channels/` — Telegram/Discord/Slack/etc channels
- `src/tools/` — tool execution surface (shell, file, memory, browser)
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO); see `docs/hardware-peripherals-design.md`
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO)
- `src/runtime/` — runtime adapters (currently native)
- `docs/` — task-oriented documentation system (hubs, unified TOC, references, operations, security proposals, multilingual guides)
- `docs/` — topic-based documentation (setup-guides, reference, ops, security, hardware, contributing, maintainers)
- `.github/` — CI, templates, automation workflows
## 4.1 Documentation System Contract (Required)
Treat documentation as a first-class product surface, not a post-merge artifact.
Canonical entry points:
- root READMEs: `README.md`, `README.zh-CN.md`, `README.ja.md`, `README.ru.md`, `README.fr.md`, `README.vi.md`
- docs hubs: `docs/README.md`, `docs/README.zh-CN.md`, `docs/README.ja.md`, `docs/README.ru.md`, `docs/README.fr.md`, `docs/i18n/vi/README.md`
- unified TOC: `docs/SUMMARY.md`
Supported locales (current contract):
- `en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`
Collection indexes (category navigation):
- `docs/getting-started/README.md`
- `docs/reference/README.md`
- `docs/operations/README.md`
- `docs/security/README.md`
- `docs/hardware/README.md`
- `docs/contributing/README.md`
- `docs/project/README.md`
Runtime-contract references (must track behavior changes):
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`
Required docs governance rules:
- Keep README/hub top navigation and quick routes intuitive and non-duplicative.
- Keep entry-point parity across all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when changing navigation architecture.
- If a change touches docs IA, runtime-contract references, or user-facing wording in shared docs, perform i18n follow-through for currently supported locales in the same PR:
- Update locale navigation links (`README*`, `docs/README*`, `docs/SUMMARY.md`).
- Update localized runtime-contract docs where equivalents exist (at minimum `commands-reference`, `config-reference`, `troubleshooting` for `fr` and `vi`).
- For Vietnamese, treat `docs/i18n/vi/**` as canonical. Keep `docs/*.<locale>.md` compatibility shims aligned if present.
- Keep proposal/roadmap docs explicitly labeled; avoid mixing proposal text into runtime-contract docs.
- Keep project snapshots date-stamped and immutable once superseded by a newer date.
## 5) Risk Tiers by Path (Review Depth Contract)
Use these tiers when deciding validation depth and review rigor.
## Risk Tiers
- **Low risk**: docs/chore/tests-only changes
- **Medium risk**: most `src/**` behavior changes without boundary/security impact
@@ -202,282 +57,34 @@ Use these tiers when deciding validation depth and review rigor.
When uncertain, classify as higher risk.
## 6) Agent Workflow (Required)
## Workflow
1. **Read before write**
- Inspect existing module, factory wiring, and adjacent tests before editing.
2. **Define scope boundary**
- One concern per PR; avoid mixed feature+refactor+infra patches.
3. **Implement minimal patch**
- Apply KISS/YAGNI/DRY rule-of-three explicitly.
4. **Validate by risk tier**
- Docs-only: lightweight checks.
- Code/risky changes: full relevant checks and focused scenarios.
5. **Document impact**
- Update docs/PR notes for behavior, risk, side effects, and rollback.
- If CLI/config/provider/channel behavior changed, update corresponding runtime-contract references.
- If docs entry points changed, keep all supported locale README/docs-hub navigation aligned (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`).
6. **Respect queue hygiene**
- If stacked PR: declare `Depends on #...`.
- If replacing old PR: declare `Supersedes #...`.
1. **Read before write** — inspect existing module, factory wiring, and adjacent tests before editing.
2. **One concern per PR** — avoid mixed feature+refactor+infra patches.
3. **Implement minimal patch** — no speculative abstractions, no config keys without a concrete use case.
4. **Validate by risk tier** — docs-only: lightweight checks. Code changes: full relevant checks.
5. **Document impact** — update PR notes for behavior, risk, side effects, and rollback.
6. **Queue hygiene** — stacked PR: declare `Depends on #...`. Replacing old PR: declare `Supersedes #...`.
### 6.1 Branch / Commit / PR Flow (Required)
Branch/commit/PR rules:
- Work from a non-`master` branch. Open a PR to `master`; do not push directly.
- Use conventional commit titles. Prefer small PRs (`size: XS/S/M`).
- Follow `.github/pull_request_template.md` fully.
- Never commit secrets, personal data, or real identity information (see `@docs/contributing/pr-discipline.md`).
All contributors (human or agent) must follow the same collaboration flow:
- Create and work from a non-`main` branch.
- Commit changes to that branch with clear, scoped commit messages.
- Open a PR to `main`; do not push directly to `main`.
- Wait for required checks and review outcomes before merging.
- Merge via PR controls (squash/rebase/merge as repository policy allows).
- Branch deletion after merge is optional; long-lived branches are allowed when intentionally maintained.
### 6.2 Worktree Workflow (Required for Multi-Track Agent Work)
Use Git worktrees to isolate concurrent agent/human tracks safely and predictably:
- Use one worktree per active branch/PR stream to avoid cross-task contamination.
- Keep each worktree on a single branch; do not mix unrelated edits in one worktree.
- Run validation commands inside the corresponding worktree before commit/PR.
- Name worktrees clearly by scope (for example: `wt/ci-hardening`, `wt/provider-fix`) and remove stale worktrees when no longer needed.
- PR checkpoint rules from section 6.1 still apply to worktree-based development.
### 6.3 Code Naming Contract (Required)
Apply these naming rules for all code changes unless a subsystem has a stronger existing pattern.
- Use Rust standard casing consistently: modules/files `snake_case`, types/traits/enums `PascalCase`, functions/variables `snake_case`, constants/statics `SCREAMING_SNAKE_CASE`.
- Name types and modules by domain role, not implementation detail (for example `DiscordChannel`, `SecurityPolicy`, `MemoryStore` over vague names like `Manager`/`Helper`).
- Keep trait implementer naming explicit and predictable: `<ProviderName>Provider`, `<ChannelName>Channel`, `<ToolName>Tool`, `<BackendName>Memory`.
- Keep factory registration keys stable, lowercase, and user-facing (for example `"openai"`, `"discord"`, `"shell"`), and avoid alias sprawl without migration need.
- Name tests by behavior/outcome (`<subject>_<expected_behavior>`) and keep fixture identifiers neutral/project-scoped.
- If identity-like naming is required in tests/examples, use ZeroClaw-native labels only (`ZeroClawAgent`, `zeroclaw_user`, `zeroclaw_node`).
### 6.4 Architecture Boundary Contract (Required)
Use these rules to keep the trait/factory architecture stable under growth.
- Extend capabilities by adding trait implementations + factory wiring first; avoid cross-module rewrites for isolated features.
- Keep dependency direction inward to contracts: concrete integrations depend on trait/config/util layers, not on other concrete integrations.
- Avoid creating cross-subsystem coupling (for example provider code importing channel internals, tool code mutating gateway policy directly).
- Keep module responsibilities single-purpose: orchestration in `agent/`, transport in `channels/`, model I/O in `providers/`, policy in `security/`, execution in `tools/`.
- Introduce new shared abstractions only after repeated use (rule-of-three), with at least one real caller in current scope.
- For config/schema changes, treat keys as public contract: document defaults, compatibility impact, and migration/rollback path.
## 7) Change Playbooks
### 7.1 Adding a Provider
- Implement `Provider` in `src/providers/`.
- Register in `src/providers/mod.rs` factory.
- Add focused tests for factory wiring and error paths.
- Avoid provider-specific behavior leaks into shared orchestration code.
### 7.2 Adding a Channel
- Implement `Channel` in `src/channels/`.
- Keep `send`, `listen`, `health_check`, typing semantics consistent.
- Cover auth/allowlist/health behavior with tests.
### 7.3 Adding a Tool
- Implement `Tool` in `src/tools/` with strict parameter schema.
- Validate and sanitize all inputs.
- Return structured `ToolResult`; avoid panics in runtime path.
### 7.4 Adding a Peripheral
- Implement `Peripheral` in `src/peripherals/`.
- Peripherals expose `tools()` — each tool delegates to the hardware (GPIO, sensors, etc.).
- Register board type in config schema if needed.
- See `docs/hardware-peripherals-design.md` for protocol and firmware notes.
### 7.5 Security / Runtime / Gateway Changes
- Include threat/risk notes and rollback strategy.
- Add/update tests or validation evidence for failure modes and boundaries.
- Keep observability useful but non-sensitive.
- For `.github/workflows/**` changes, include Actions allowlist impact in PR notes and update `docs/actions-source-policy.md` when sources change.
### 7.6 Docs System / README / IA Changes
- Treat docs navigation as product UX: preserve clear pathing from README -> docs hub -> SUMMARY -> category index.
- Keep top-level nav concise; avoid duplicative links across adjacent nav blocks.
- When runtime surfaces change, update related references (`commands/providers/channels/config/runbook/troubleshooting`).
- Keep multilingual entry-point parity for all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when nav or key wording changes.
- When shared docs wording changes, sync corresponding localized docs for supported locales in the same PR (or explicitly document deferral and follow-up PR).
- For docs snapshots, add new date-stamped files for new sprints rather than rewriting historical context.
## 8) Validation Matrix
Default local checks for code changes:
```bash
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test
```
Preferred local pre-PR validation path (recommended, not required):
```bash
./dev/ci.sh all
```
Notes:
- Local Docker-based CI is strongly recommended when Docker is available.
- Contributors are not blocked from opening a PR if local Docker CI is unavailable; in that case run the most relevant native checks and document what was run.
Additional expectations by change type:
- **Docs/template-only**:
- run markdown lint and link-integrity checks
- if touching README/docs-hub/SUMMARY/collection indexes, verify EN/ZH/JA/RU navigation parity
- if touching bootstrap docs/scripts, run `bash -n bootstrap.sh scripts/bootstrap.sh scripts/install.sh`
- **Workflow changes**: validate YAML syntax; run workflow lint/sanity checks when available.
- **Security/runtime/gateway/tools**: include at least one boundary/failure-mode validation.
If full checks are impractical, run the most relevant subset and document what was skipped and why.
## 9) Collaboration and PR Discipline
- Follow `.github/pull_request_template.md` fully (including side effects / blast radius).
- Keep PR descriptions concrete: problem, change, non-goals, risk, rollback.
- Use conventional commit titles.
- Prefer small PRs (`size: XS/S/M`) when possible.
- Agent-assisted PRs are welcome, **but contributors remain accountable for understanding what their code will do**.
### 9.1 Privacy/Sensitive Data and Neutral Wording (Required)
Treat privacy and neutrality as merge gates, not best-effort guidelines.
- Never commit personal or sensitive data in code, docs, tests, fixtures, snapshots, logs, examples, or commit messages.
- Prohibited data includes (non-exhaustive): real names, personal emails, phone numbers, addresses, access tokens, API keys, credentials, IDs, and private URLs.
- Use neutral project-scoped placeholders (for example: `user_a`, `test_user`, `project_bot`, `example.com`) instead of real identity data.
- Test names/messages/fixtures must be impersonal and system-focused; avoid first-person or identity-specific language.
- If identity-like context is unavoidable, use ZeroClaw-scoped roles/labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`) and avoid real-world personas.
- Recommended identity-safe naming palette (use when identity-like context is required):
- actor labels: `ZeroClawAgent`, `ZeroClawOperator`, `ZeroClawMaintainer`, `zeroclaw_user`
- service/runtime labels: `zeroclaw_bot`, `zeroclaw_service`, `zeroclaw_runtime`, `zeroclaw_node`
- environment labels: `zeroclaw_project`, `zeroclaw_workspace`, `zeroclaw_channel`
- If reproducing external incidents, redact and anonymize all payloads before committing.
- Before push, review `git diff --cached` specifically for accidental sensitive strings and identity leakage.
### 9.2 Superseded-PR Attribution (Required)
When a PR supersedes another contributor's PR and carries forward substantive code or design decisions, preserve authorship explicitly.
- In the integrating commit message, add one `Co-authored-by: Name <email>` trailer per superseded contributor whose work is materially incorporated.
- Use a GitHub-recognized email (`<login@users.noreply.github.com>` or the contributor's verified commit email) so attribution is rendered correctly.
- Keep trailers on their own lines after a blank line at commit-message end; never encode them as escaped `\\n` text.
- In the PR body, list superseded PR links and briefly state what was incorporated from each.
- If no actual code/design was incorporated (only inspiration), do not use `Co-authored-by`; give credit in PR notes instead.
### 9.3 Superseded-PR PR Template (Recommended)
When superseding multiple PRs, use a consistent title/body structure to reduce reviewer ambiguity.
- Recommended title format: `feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]`
- If this is docs/chore/meta only, keep the same supersede suffix and use the appropriate conventional-commit type.
- In the PR body, include the following template (fill placeholders, remove non-applicable lines):
```md
## Supersedes
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>
## Integrated Scope
- From #<pr_a>: <what was materially incorporated>
- From #<pr_b>: <what was materially incorporated>
- From #<pr_n>: <what was materially incorporated>
## Attribution
- Co-authored-by trailers added for materially incorporated contributors: Yes/No
- If No, explain why (for example: no direct code/design carry-over)
## Non-goals
- <explicitly list what was not carried over>
## Risk and Rollback
- Risk: <summary>
- Rollback: <revert commit/PR strategy>
```
### 9.4 Superseded-PR Commit Template (Recommended)
When a commit unifies or supersedes prior PR work, use a deterministic commit message layout so attribution is machine-parsed and reviewer-friendly.
- Keep one blank line between message sections, and exactly one blank line before trailer lines.
- Keep each trailer on its own line; do not wrap, indent, or encode as escaped `\n` text.
- Add one `Co-authored-by` trailer per materially incorporated contributor, using GitHub-recognized email.
- If no direct code/design is carried over, omit `Co-authored-by` and explain attribution in the PR body instead.
```text
feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]
<one-paragraph summary of integrated outcome>
Supersedes:
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>
Integrated scope:
- <subsystem_or_feature_a>: from #<pr_x>
- <subsystem_or_feature_b>: from #<pr_y>
Co-authored-by: <Name A> <login_a@users.noreply.github.com>
Co-authored-by: <Name B> <login_b@users.noreply.github.com>
```
Reference docs:
- `CONTRIBUTING.md`
- `docs/README.md`
- `docs/SUMMARY.md`
- `docs/docs-inventory.md`
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`
- `docs/pr-workflow.md`
- `docs/reviewer-playbook.md`
- `docs/ci-map.md`
- `docs/actions-source-policy.md`
## 10) Anti-Patterns (Do Not)
## Anti-Patterns
- Do not add heavy dependencies for minor convenience.
- Do not silently weaken security policy or access constraints.
- Do not add speculative config/feature flags “just in case”.
- Do not add speculative config/feature flags "just in case".
- Do not mix massive formatting-only changes with functional changes.
- Do not modify unrelated modules “while here”.
- Do not modify unrelated modules "while here".
- Do not bypass failing checks without explicit explanation.
- Do not hide behavior-changing side effects in refactor commits.
- Do not include personal identity or sensitive information in test data, examples, docs, or commits.
## 11) Handoff Template (Agent -> Agent / Maintainer)
## Linked References
When handing off work, include:
1. What changed
2. What did not change
3. Validation run and results
4. Remaining risks / unknowns
5. Next recommended action
## 12) Vibe Coding Guardrails
When working in fast iterative mode:
- Keep each iteration reversible (small commits, clear rollback).
- Validate assumptions with code search before implementing.
- Prefer deterministic behavior over clever shortcuts.
- Do not “ship and hope” on security-sensitive paths.
- If uncertain, leave a concrete TODO with verification context, not a hidden guess.
- `@docs/contributing/change-playbooks.md` — adding providers, channels, tools, peripherals; security/gateway changes; architecture boundaries
- `@docs/contributing/pr-discipline.md` — privacy rules, superseded-PR attribution/templates, handoff template
- `@docs/contributing/docs-contract.md` — docs system contract, i18n rules, locale parity
+1 -1
@@ -60,7 +60,7 @@ representative at an online or offline event.
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
https://x.com/willsarg617.
https://x.com/argenistherose.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
+49 -13
@@ -2,6 +2,42 @@
Thanks for your interest in contributing to ZeroClaw! This guide will help you get started.
---
## ⚠️ Branch Migration Notice (March 2026)
**`master` is the ONLY default branch. The `main` branch no longer exists.**
If you have an existing fork or local clone that tracks `main`, you **must** update it:
```bash
# Update your local clone to track master
git checkout master
git branch -D main 2>/dev/null # delete local main if it exists
git remote set-head origin master
git fetch origin --prune # remove stale remote refs
# If your fork still has a main branch, delete it
git push origin --delete main 2>/dev/null
```
All PRs must target **`master`**. PRs targeting `main` will be rejected.
**Background:** ZeroClaw previously used `main` in some documentation and scripts, which caused 404 errors, broken CI refs, and contributor confusion (see [#2929](https://github.com/zeroclaw-labs/zeroclaw/issues/2929), [#3061](https://github.com/zeroclaw-labs/zeroclaw/issues/3061), [#3194](https://github.com/zeroclaw-labs/zeroclaw/pull/3194)). As of March 2026, all references have been corrected, stale branches cleaned up, and the `main` branch permanently deleted.
---
## Branching Model
> **`master`** is the single source-of-truth branch.
>
> **How contributors should work:**
> 1. Fork the repository
> 2. Create a `feat/*` or `fix/*` branch from `master`
> 3. Open a PR targeting `master`
>
> Do **not** create or push to a `main` branch. There is no `main` branch — it will not work.
## First-Time Contributors
Welcome — contributions of all sizes are valued. If this is your first contribution, here is how to get started:
@@ -15,9 +51,9 @@ Welcome — contributions of all sizes are valued. If this is your first contrib
3. **Follow the fork → branch → change → test → PR workflow:**
- Fork the repository and clone your fork
- Create a feature branch (`git checkout -b fix/my-change`)
- Create a feature branch (`git checkout -b feat/my-change` or `git checkout -b fix/my-change`)
- Make your changes and run `cargo fmt && cargo clippy && cargo test`
- Open a PR against `dev` using the PR template
- Open a PR against `master` using the PR template
4. **Start with Track A.** ZeroClaw uses three [collaboration tracks](#collaboration-tracks-risk-based) (A/B/C) based on risk. First-time contributors should target **Track A** (docs, tests, chore) — these require lighter review and are the fastest path to a merged PR.
@@ -210,20 +246,20 @@ To keep docs useful under high PR volume, we use these rules:
- **Side-effect visibility**: document blast radius, failure modes, and rollback before merge.
- **Automation assists, humans decide**: bots triage and label, but merge accountability stays human.
- **Index-first discoverability**: `docs/README.md` is the first entry point for operational documentation.
- **Template-first authoring**: start new operational docs from `docs/doc-template.md`.
- **Template-first authoring**: start new operational docs from `docs/contributing/doc-template.md`.
### Documentation System Map
| Doc | Primary purpose | When to update |
|---|---|---|
| `docs/README.md` | canonical docs index and taxonomy | add/remove docs or change documentation ownership/navigation |
| `docs/doc-template.md` | standard skeleton for new operational documentation | when required sections or documentation quality bar changes |
| `docs/contributing/doc-template.md` | standard skeleton for new operational documentation | when required sections or documentation quality bar changes |
| `CONTRIBUTING.md` | contributor contract and readiness baseline | contributor expectations or policy changes |
| `docs/pr-workflow.md` | governance logic and merge contract | workflow/risk/merge gate changes |
| `docs/reviewer-playbook.md` | reviewer operating checklist | review depth or triage behavior changes |
| `docs/ci-map.md` | CI ownership and triage entry points | workflow trigger/job ownership changes |
| `docs/network-deployment.md` | runtime deployment and network operating guide | gateway/channel/tunnel/network runtime behavior changes |
| `docs/proxy-agent-playbook.md` | agent-operable proxy runbook and rollback recipes | proxy scope/selector/tooling behavior changes |
| `docs/contributing/pr-workflow.md` | governance logic and merge contract | workflow/risk/merge gate changes |
| `docs/contributing/reviewer-playbook.md` | reviewer operating checklist | review depth or triage behavior changes |
| `docs/contributing/ci-map.md` | CI ownership and triage entry points | workflow trigger/job ownership changes |
| `docs/ops/network-deployment.md` | runtime deployment and network operating guide | gateway/channel/tunnel/network runtime behavior changes |
| `docs/ops/proxy-agent-playbook.md` | agent-operable proxy runbook and rollback recipes | proxy scope/selector/tooling behavior changes |
## PR Definition of Ready (DoR)
@@ -237,7 +273,7 @@ Before requesting review, ensure all of the following are true:
- Tests/fixtures/examples use neutral project-scoped wording (no identity-specific or first-person phrasing).
- If identity-like wording is required, use ZeroClaw-centric labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`).
- If docs were changed, update `docs/README.md` navigation and reciprocal links with related docs.
- If a new operational doc was added, start from `docs/doc-template.md` and keep risk/rollback/troubleshooting sections where applicable.
- If a new operational doc was added, start from `docs/contributing/doc-template.md` and keep risk/rollback/troubleshooting sections where applicable.
- Linked issue (or rationale for no issue) is included.
## PR Definition of Done (DoD)
@@ -265,9 +301,9 @@ When PR traffic is high (especially with AI-assisted contributions), these rules
- **Identity normalization**: when identity traits are unavoidable, use ZeroClaw/project-native roles instead of personal or real-world identities.
- **Supersede hygiene**: if your PR replaces an older open PR, add `Supersedes #...` and request maintainers close the outdated one.
Full maintainer workflow: [`docs/pr-workflow.md`](docs/pr-workflow.md).
CI workflow ownership and triage map: [`docs/ci-map.md`](docs/ci-map.md).
Reviewer operating checklist: [`docs/reviewer-playbook.md`](docs/reviewer-playbook.md).
Full maintainer workflow: [`docs/contributing/pr-workflow.md`](docs/contributing/pr-workflow.md).
CI workflow ownership and triage map: [`docs/contributing/ci-map.md`](docs/contributing/ci-map.md).
Reviewer operating checklist: [`docs/contributing/reviewer-playbook.md`](docs/contributing/reviewer-playbook.md).
## Agent Collaboration Guidance
Generated
+204 -292
File diff suppressed because it is too large
+38 -9
@@ -4,7 +4,7 @@ resolver = "2"
[package]
name = "zeroclaw"
version = "0.1.7"
version = "0.2.0"
edition = "2021"
authors = ["theonlyhennygod"]
license = "MIT OR Apache-2.0"
@@ -21,7 +21,7 @@ clap = { version = "4.5", features = ["derive"] }
clap_complete = "4.5"
# Async runtime - feature-optimized for size
tokio = { version = "1.42", default-features = false, features = ["rt-multi-thread", "macros", "time", "net", "io-util", "sync", "process", "io-std", "fs", "signal"] }
tokio = { version = "1.50", default-features = false, features = ["rt-multi-thread", "macros", "time", "net", "io-util", "sync", "process", "io-std", "fs", "signal"] }
tokio-util = { version = "0.7", default-features = false }
tokio-stream = { version = "0.1.18", default-features = false, features = ["fs", "sync"] }
@@ -48,8 +48,8 @@ schemars = "1.2"
tracing = { version = "0.1", default-features = false }
tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt", "ansi", "env-filter"] }
# Observability - Prometheus metrics
prometheus = { version = "0.14", default-features = false }
# Observability - Prometheus metrics (optional; requires AtomicU64, unavailable on 32-bit)
prometheus = { version = "0.14", default-features = false, optional = true }
# Base64 encoding (screenshots, image data)
base64 = "0.22"
@@ -62,14 +62,14 @@ urlencoding = "2.1"
nanohtml2text = "0.2"
# Optional Rust-native browser automation backend
fantoccini = { version = "0.22.0", optional = true, default-features = false, features = ["rustls-tls"] }
fantoccini = { version = "0.22.1", optional = true, default-features = false, features = ["rustls-tls"] }
# Error handling
anyhow = "1.0"
thiserror = "2.0"
# UUID generation
uuid = { version = "1.11", default-features = false, features = ["v4", "std"] }
uuid = { version = "1.22", default-features = false, features = ["v4", "std"] }
# Authenticated encryption (AEAD) for secret store
chacha20poly1305 = "0.10"
@@ -82,6 +82,9 @@ hex = "0.4"
# CSPRNG for secure token generation
rand = "0.10"
# Portable atomic fallbacks for targets without native 64-bit atomics
portable-atomic = "1"
# serde-big-array for wa-rs storage (large array serialization)
serde-big-array = { version = "0.5", optional = true }
@@ -117,7 +120,7 @@ which = "8.0"
# WebSocket client channels (Discord/Lark/DingTalk/Nostr)
tokio-tungstenite = { version = "0.28", features = ["rustls-tls-webpki-roots"] }
futures-util = { version = "0.3", default-features = false, features = ["sink"] }
nostr-sdk = { version = "0.44", default-features = false, features = ["nip04", "nip59"] }
nostr-sdk = { version = "0.44", default-features = false, features = ["nip04", "nip59"], optional = true }
regex = "1.10"
hostname = "0.4.2"
rustls = "0.23"
@@ -184,11 +187,14 @@ landlock = { version = "0.4", optional = true }
libc = "0.2"
[features]
default = []
default = ["observability-prometheus", "channel-nostr"]
channel-nostr = ["dep:nostr-sdk"]
hardware = ["nusb", "tokio-serial"]
channel-matrix = ["dep:matrix-sdk"]
channel-lark = ["dep:prost"]
channel-feishu = ["channel-lark"] # Alias for Feishu users (Lark and Feishu are the same platform)
memory-postgres = ["dep:postgres"]
observability-prometheus = ["dep:prometheus"]
observability-otel = ["dep:opentelemetry", "dep:opentelemetry_sdk", "dep:opentelemetry-otlp"]
peripheral-rpi = ["rppal"]
# Browser backend feature alias used by cfg(feature = "browser-native")
@@ -200,6 +206,8 @@ sandbox-landlock = ["dep:landlock"]
sandbox-bubblewrap = []
# Backward-compatible alias for older invocations
landlock = ["sandbox-landlock"]
# Prometheus metrics observer (requires 64-bit atomics; disable on 32-bit targets)
metrics = ["observability-prometheus"]
# probe = probe-rs for Nucleo memory read (adds ~50 deps; optional)
probe = ["dep:probe-rs"]
# rag-pdf = PDF ingestion for datasheet RAG
@@ -220,6 +228,11 @@ inherits = "release"
codegen-units = 8 # Parallel codegen for faster builds on powerful machines (16GB+ RAM recommended)
# Use: cargo build --profile release-fast
[profile.ci]
inherits = "release"
lto = "thin" # Much faster than fat LTO; still catches release-mode issues
codegen-units = 16 # Full parallelism for CI runners
[profile.dist]
inherits = "release"
opt-level = "z"
@@ -229,11 +242,27 @@ strip = true
panic = "abort"
[dev-dependencies]
tempfile = "3.14"
tempfile = "3.26"
criterion = { version = "0.8", features = ["async_tokio"] }
wiremock = "0.6"
scopeguard = "1.2"
[[test]]
name = "component"
path = "tests/test_component.rs"
[[test]]
name = "integration"
path = "tests/test_integration.rs"
[[test]]
name = "system"
path = "tests/test_system.rs"
[[test]]
name = "live"
path = "tests/test_live.rs"
[[bench]]
name = "agent_benchmarks"
harness = false
+17 -13
@@ -58,20 +58,20 @@ RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/regist
# Prepare runtime directory structure and default config inline (no extra stage)
RUN mkdir -p /zeroclaw-data/.zeroclaw /zeroclaw-data/workspace && \
cat > /zeroclaw-data/.zeroclaw/config.toml <<EOF && \
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> /zeroclaw-data/.zeroclaw/config.toml && \
chown -R 65534:65534 /zeroclaw-data
workspace_dir = "/zeroclaw-data/workspace"
config_path = "/zeroclaw-data/.zeroclaw/config.toml"
api_key = ""
default_provider = "openrouter"
default_model = "anthropic/claude-sonnet-4-20250514"
default_temperature = 0.7
[gateway]
port = 42617
host = "[::]"
allow_public_bind = true
EOF
# ── Stage 2: Development Runtime (Debian) ────────────────────
FROM debian:trixie-slim@sha256:f6e2cfac5cf956ea044b4bd75e6397b4372ad88fe00908045e9a0d21712ae3ba AS dev
@@ -90,6 +90,8 @@ COPY dev/config.template.toml /zeroclaw-data/.zeroclaw/config.toml
RUN chown 65534:65534 /zeroclaw-data/.zeroclaw/config.toml
# Environment setup
# Ensure UTF-8 locale so CJK / multibyte input is handled correctly
ENV LANG=C.UTF-8
# Use consistent workspace path
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
@@ -114,6 +116,8 @@ COPY --from=builder /app/zeroclaw /usr/local/bin/zeroclaw
COPY --from=builder /zeroclaw-data /zeroclaw-data
# Environment setup
# Ensure UTF-8 locale so CJK / multibyte input is handled correctly
ENV LANG=C.UTF-8
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
# Default provider and model are set in config.toml, not here,
+120
@@ -0,0 +1,120 @@
# syntax=docker/dockerfile:1.7
# Dockerfile.debian — Shell-equipped variant of the ZeroClaw container.
#
# The default Dockerfile produces a distroless "release" image with no shell,
# which is ideal for minimal attack surface but prevents the agent from using
# shell-based tools (pwd, ls, git, curl, etc.).
#
# This variant uses debian:bookworm-slim as the runtime base and ships
# essential CLI tools so the agent can operate as a full coding assistant.
#
# Build:
#   docker build -f Dockerfile.debian -t zeroclaw:debian .
#
# Or with docker compose:
#   docker compose -f docker-compose.yml -f docker-compose.debian.yml up

# ── Stage 1: Build (identical to main Dockerfile) ───────────
FROM rust:1.93-slim@sha256:9663b80a1621253d30b146454f903de48f0af925c967be48c84745537cd35d8b AS builder
WORKDIR /app

# Install build dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y \
    pkg-config \
    && rm -rf /var/lib/apt/lists/*

# 1. Copy manifests to cache dependencies
COPY Cargo.toml Cargo.lock ./
COPY crates/robot-kit/Cargo.toml crates/robot-kit/Cargo.toml

# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches crates/robot-kit/src \
    && echo "fn main() {}" > src/main.rs \
    && echo "fn main() {}" > benches/agent_benchmarks.rs \
    && echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
    --mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
    --mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
    cargo build --release --locked
RUN rm -rf src benches crates/robot-kit/src

# 2. Copy only build-relevant source paths (avoid cache-busting on docs/tests/scripts)
COPY src/ src/
COPY benches/ benches/
COPY crates/ crates/
COPY firmware/ firmware/
COPY web/ web/

# Keep release builds resilient when frontend dist assets are not prebuilt in Git.
RUN mkdir -p web/dist && \
    if [ ! -f web/dist/index.html ]; then \
    printf '%s\n' \
    '<!doctype html>' \
    '<html lang="en">' \
    '  <head>' \
    '    <meta charset="utf-8" />' \
    '    <meta name="viewport" content="width=device-width,initial-scale=1" />' \
    '    <title>ZeroClaw Dashboard</title>' \
    '  </head>' \
    '  <body>' \
    '    <h1>ZeroClaw Dashboard Unavailable</h1>' \
    '    <p>Frontend assets are not bundled in this build. Build the web UI to populate <code>web/dist</code>.</p>' \
    '  </body>' \
    '</html>' > web/dist/index.html; \
    fi
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
    --mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
    --mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
    cargo build --release --locked && \
    cp target/release/zeroclaw /app/zeroclaw && \
    strip /app/zeroclaw

# Prepare runtime directory structure and default config inline (no extra stage)
RUN mkdir -p /zeroclaw-data/.zeroclaw /zeroclaw-data/workspace && \
    printf '%s\n' \
    'workspace_dir = "/zeroclaw-data/workspace"' \
    'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
    'api_key = ""' \
    'default_provider = "openrouter"' \
    'default_model = "anthropic/claude-sonnet-4-20250514"' \
    'default_temperature = 0.7' \
    '' \
    '[gateway]' \
    'port = 42617' \
    'host = "[::]"' \
    'allow_public_bind = true' \
    > /zeroclaw-data/.zeroclaw/config.toml && \
    chown -R 65534:65534 /zeroclaw-data

# ── Stage 2: Runtime (Debian with shell) ─────────────────────
FROM debian:bookworm-slim AS runtime

# Install essential tools for agent shell operations
RUN apt-get update && apt-get install -y --no-install-recommends \
    bash \
    ca-certificates \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*

COPY --from=builder /app/zeroclaw /usr/local/bin/zeroclaw
COPY --from=builder /zeroclaw-data /zeroclaw-data

# Environment setup
# Ensure UTF-8 locale so CJK / multibyte input is handled correctly
ENV LANG=C.UTF-8
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
# Default provider and model are set in config.toml, not here,
# so config file edits are not silently overridden
ENV ZEROCLAW_GATEWAY_PORT=42617
# API_KEY must be provided at runtime!

WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617

ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
+1 -1
@@ -17,7 +17,7 @@ License
 This software is available under a dual-license model:
-1. MIT License — see LICENSE
+1. MIT License — see LICENSE-MIT
 2. Apache License 2.0 — see LICENSE-APACHE
 You may use either license. Contributors grant rights under both.
+914
@@ -0,0 +1,914 @@
<p align="center" dir="rtl">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center" dir="rtl">
<strong>صفر عبء. صفر تنازلات. 100% Rust. 100% محايد.</strong><br>
<strong dir="ltr">⚡️ يعمل على أجهزة بقيمة $10 بأقل من 5MB RAM: ذاكرة أقل بنسبة 99% من OpenClaw وأرخص بنسبة 98% من Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center" dir="rtl">
بني من قبل طلاب وأعضاء مجتمعات هارفارد ومعهد ماساتشوستس للتكنولوجيا وSundai.Club.
</p>
<p align="center" dir="rtl">
🌐 <strong>اللغات:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
<p align="center" dir="rtl">
<a href="#البدء-السريع">البدء السريع</a> |
<a href="bootstrap.sh">الإعداد بنقرة واحدة</a> |
<a href="docs/README.md">مركز التوثيق</a> |
<a href="docs/SUMMARY.md">فهرس التوثيق</a>
</p>
<p align="center" dir="rtl">
<strong>الوصول السريع:</strong>
<a href="docs/reference/README.md">المرجع</a> ·
<a href="docs/operations/README.md">العمليات</a> ·
<a href="docs/troubleshooting.md">استكشاف الأخطاء</a> ·
<a href="docs/security/README.md">الأمان</a> ·
<a href="docs/hardware/README.md">الأجهزة</a> ·
<a href="docs/contributing/README.md">المساهمة</a>
</p>
<p align="center" dir="rtl">
<strong>بنية تحتية سريعة وخفيفة ومستقلة تمامًا لمساعد الذكاء الاصطناعي</strong><br />
انشر في أي مكان. استبدل أي شيء.
</p>
<p align="center" dir="rtl">
ZeroClaw هو <strong>نظام تشغيل وقت التشغيل</strong> لعمليات العمل الآلية — بنية تحتية تجرد النماذج والأدوات والذاكرة والتنفيذ لبناء وكلاء مرة واحدة وتشغيلهم في أي مكان.
</p>
<p align="center"><code>بنية قائمة على السمات · وقت تشغيل آمن افتراضيًا · موفر/قناة/أداة قابلة للتبديل · كل شيء قابل للتوصيل</code></p>
### 📢 Announcements
Use this table for important notices (compatibility changes, security advisories, maintenance windows, and release holds).
| Date (UTC) | Level | Notice | Action |
| ---------- | ----- | ------ | ------ |
| 2026-02-19 | _Critical_ | **We are not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository impersonates our official site/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience in the meantime. We are still seeing impersonation attempts: do not take part in any investment/fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its authentication and credential-use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the consumer Terms of Use. | Please temporarily avoid Claude Code OAuth integrations to prevent any potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
- 🏎️ **Lightweight runtime by default:** Common CLI workflows and status commands run within a few megabytes of memory in production builds.
- 💰 **Cost-efficient deployment:** Designed for low-cost boards and small cloud instances with no heavy runtime dependencies.
- ⚡ **Fast cold starts:** The single-binary Rust runtime keeps command and daemon startup near-instant for day-to-day operations.
- 🌍 **Portable architecture:** A single-binary workflow on ARM, x86, and RISC-V with swappable provider/channel/tool.
### Why teams choose ZeroClaw
- **Lightweight by default:** small Rust binary, fast startup, low memory footprint.
- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scoping.
- **Fully swappable:** core systems are traits (providers, channels, tools, memory, tunnels).
- **No vendor lock-in:** OpenAI-compatible provider support plus pluggable custom endpoints.
## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproducible)
A quick benchmark on a local machine (macOS arm64, February 2026), normalized for 0.8 GHz edge hardware.
| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
| **Language** | TypeScript | Python | Go | **Rust** |
| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
| **Startup (0.8 GHz core)** | > 500 s | > 30 s | < 1 s | **< 10 ms** |
| **Binary size** | ~28 MB (dist) | N/A (scripts) | ~8 MB | **3.4 MB** |
| **Cost** | Mac Mini $599 | Linux SBC ~$50 | $10 Linux board | **Any $10 device** |
> Notes: ZeroClaw results are measured on production builds with `/usr/bin/time -l`. OpenClaw requires a Node.js runtime (typically ~390 MB of additional memory overhead), while NanoBot requires a Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.
<p align="center">
<img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw comparison" width="800" />
</p>
### Reproducible local measurement
Benchmark claims can drift as the code and toolchains evolve, so always measure your current build locally:
```bash
cargo build --release
ls -lh target/release/zeroclaw
/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status
```
Sample measurement (macOS arm64, taken February 18, 2026):
- Release binary size: `8.8M`
- `zeroclaw --help`: real time about `0.02s`, peak memory footprint ~`3.9 MB`
- `zeroclaw status`: real time about `0.01s`, peak memory footprint ~`4.1 MB`
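Note that `-l` is the BSD/macOS flag of `/usr/bin/time`; the GNU implementation on most Linux distributions reports peak memory with `-v` instead. A minimal equivalent for Linux hosts:

```bash
# GNU time (Linux): -v prints "Maximum resident set size" in kilobytes
cargo build --release
/usr/bin/time -v target/release/zeroclaw --help 2>&1 | grep 'Maximum resident'
/usr/bin/time -v target/release/zeroclaw status 2>&1 | grep 'Maximum resident'
```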
## Prerequisites
<details>
<summary><strong>Windows</strong></summary>
### Windows — Required
1. **Visual Studio Build Tools** (provides the MSVC linker and the Windows SDK):
```powershell
winget install Microsoft.VisualStudio.2022.BuildTools
```
During installation (or via the Visual Studio Installer), select the **"Desktop development with C++"** workload.
2. **Rust toolchain:**
```powershell
winget install Rustlang.Rustup
```
After installing, open a new terminal and run `rustup default stable` to make sure the stable toolchain is active.
3. **Verify** that both work:
```powershell
rustc --version
cargo --version
```
### Windows — Optional
- **Docker Desktop** — required only if you use the [Docker sandboxed runtime](#current-runtime-support) (`runtime.kind = "docker"`). Install via `winget install Docker.DockerDesktop`.
</details>
<details>
<summary><strong>Linux / macOS</strong></summary>
### Linux / macOS — Required
1. **Essential build tools:**
- **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
- **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
- **macOS:** install the Xcode Command Line Tools: `xcode-select --install`
2. **Rust toolchain:**
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
See [rustup.rs](https://rustup.rs) for details.
3. **Verify:**
```bash
rustc --version
cargo --version
```
### Linux / macOS — Optional
- **Docker** — required only if you use the [Docker sandboxed runtime](#current-runtime-support) (`runtime.kind = "docker"`).
- **Linux (Debian/Ubuntu):** see [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
- **Linux (Fedora/RHEL):** see [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
- **macOS:** install Docker Desktop from [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)
</details>
## Quick Start
### Option 1: Automated setup (recommended)
The `bootstrap.sh` script installs Rust, clones ZeroClaw, compiles it, and sets up your initial development environment:
```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/bootstrap.sh | bash
```
This will:
1. Install Rust (if missing)
2. Clone the ZeroClaw repository
3. Compile ZeroClaw in release mode
4. Install `zeroclaw` into `~/.cargo/bin/`
5. Create the default workspace structure in `~/.zeroclaw/workspace/`
6. Generate a starter configuration file at `~/.zeroclaw/workspace/config.toml`
After bootstrapping, reload your shell or run `source ~/.cargo/env` to use the `zeroclaw` command globally.
### Option 2: Manual installation
<details>
<summary><strong>Click to see the manual installation steps</strong></summary>
```bash
# 1. Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# 2. Compile in release mode
cargo build --release --locked
# 3. Install the binary
cargo install --path . --locked
# 4. Initialize the workspace
zeroclaw init
# 5. Verify the installation
zeroclaw --version
zeroclaw status
```
</details>
### After installation
Once installed (via bootstrap or manually), you should see:
```
~/.zeroclaw/workspace/
├── config.toml   # Main configuration
├── .pairing      # Pairing secrets (generated on first run)
├── logs/         # Daemon/agent logs
├── skills/       # Custom skills
└── memory/       # Conversation context storage
```
**Next steps** (a combined sketch follows this list):
1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
2. Check the [configuration reference](docs/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
4. Test it over your preferred channel (see the [channels reference](docs/channels-reference.md))
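As one concrete first run, the steps above can be chained from a shell. This is a minimal sketch that assumes an Anthropic key exported via the `ANTHROPIC_API_KEY` variable listed in the provider table below; any other provider works the same way with its own key variable.

```bash
# Minimal first-run sequence (assumes Anthropic; substitute your provider's
# *_API_KEY variable from the provider support table)
export ANTHROPIC_API_KEY=sk-ant-...
zeroclaw config validate   # check config.toml syntax and values
zeroclaw daemon start      # start the background daemon
zeroclaw agent start       # start the agent (requires the running daemon)
zeroclaw status            # confirm both are up
```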
## Configuration
Edit `~/.zeroclaw/workspace/config.toml` to configure providers, channels, and system behavior.
### Quick configuration reference
```toml
[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
[providers.openai]
api_key = "sk-..."
model = "gpt-4o"
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@bot:matrix.org"
password = "..."
[memory]
kind = "markdown" # or "sqlite" or "none"
[runtime]
kind = "native" # or "docker" (requires Docker)
```
**Full reference docs:**
- [Configuration reference](docs/config-reference.md) — all settings, validation, and defaults
- [Providers reference](docs/providers-reference.md) — provider-specific AI configurations
- [Channels reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling
### Current runtime support
ZeroClaw supports two code-execution backends:
- **`native`** (default) — direct process execution, the fastest path, ideal for trusted environments
- **`docker`** — full container isolation, hardened security policies, requires Docker
Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/config-reference.md#runtime) for full details.
## Commands
```bash
# Workspace management
zeroclaw init             # Initialize a new workspace
zeroclaw status           # Show daemon/agent status
zeroclaw config validate  # Validate config.toml syntax and values
# Daemon management
zeroclaw daemon start     # Start the daemon in the background
zeroclaw daemon stop      # Stop the running daemon
zeroclaw daemon restart   # Restart the daemon (reloads config)
zeroclaw daemon logs      # Show daemon logs
# Agent management
zeroclaw agent start      # Start the agent (requires a running daemon)
zeroclaw agent stop       # Stop the agent
zeroclaw agent restart    # Restart the agent (reloads config)
# Pairing operations
zeroclaw pairing init     # Generate a new pairing secret
zeroclaw pairing rotate   # Rotate the existing pairing secret
# Tunnels (for public exposure)
zeroclaw tunnel start     # Start a tunnel to the local daemon
zeroclaw tunnel stop      # Stop the active tunnel
# Diagnostics
zeroclaw doctor           # Run system health checks
zeroclaw version          # Show version and build info
```
See the [commands reference](docs/commands-reference.md) for full options and examples.
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                        Channels (trait)                         │
│   Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom      │
└─────────────────────────┬───────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│                       Agent Orchestrator                        │
│   ┌──────────────┐  ┌──────────────┐  ┌──────────────┐          │
│   │   Message    │  │   Context    │  │     Tool     │          │
│   │   Routing    │  │    Memory    │  │  Execution   │          │
│   └──────────────┘  └──────────────┘  └──────────────┘          │
└─────────────────────────┬───────────────────────────────────────┘
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│  Providers   │  │    Memory    │  │    Tools     │
│   (trait)    │  │   (trait)    │  │   (trait)    │
├──────────────┤  ├──────────────┤  ├──────────────┤
│  Anthropic   │  │  Markdown    │  │  Filesystem  │
│  OpenAI      │  │  SQLite      │  │  Bash        │
│  Gemini      │  │  None        │  │  Web Fetch   │
│  Ollama      │  │  Custom      │  │  Custom      │
│  Custom      │  └──────────────┘  └──────────────┘
└──────────────┘
┌─────────────────────────────────────────────────────────────────┐
│                        Runtime (trait)                          │
│                       Native │ Docker                           │
└─────────────────────────────────────────────────────────────────┘
```
**Core principles:**
- Everything is a **trait** — providers, channels, tools, memory, tunnels
- Channels call the orchestrator; the orchestrator calls providers + tools
- The memory system manages conversation context (markdown, SQLite, or none)
- The runtime abstracts code execution (native or Docker)
- No vendor lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes
See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
## Examples
### Telegram bot
```toml
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
allowed_users = [987654321] # Your Telegram user ID
```
Start the daemon + agent, then message your bot on Telegram:
```
/start
Hi! Can you help me write a Python script?
```
The bot replies with AI-generated code, executes tools when asked, and maintains conversation context.
### Matrix (end-to-end encryption)
```toml
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@zeroclaw:matrix.org"
password = "..."
device_name = "zeroclaw-prod"
e2ee_enabled = true
```
Invite `@zeroclaw:matrix.org` to an encrypted room and the bot responds fully encrypted. See the [Matrix E2EE guide](docs/matrix-e2ee-guide.md) for device-verification setup.
### Multi-provider
```toml
[providers.anthropic]
enabled = true
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
[providers.openai]
enabled = true
api_key = "sk-..."
model = "gpt-4o"
[orchestrator]
default_provider = "anthropic"
fallback_providers = ["openai"] # Failover on provider error
```
If Anthropic fails or hits a rate limit, the orchestrator automatically fails over to OpenAI.
### Custom memory
```toml
[memory]
kind = "sqlite"
path = "~/.zeroclaw/workspace/memory/conversations.db"
retention_days = 90 # Auto-delete after 90 days
```
Or use Markdown for human-readable storage:
```toml
[memory]
kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```
See the [configuration reference](docs/config-reference.md#memory) for all memory options.
## Provider support
| Provider | Status | API Key | Example Models |
| ----------------- | ----------- | ------------------- | ---------------------------------------------------- |
| **Anthropic** | ✅ Stable | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
| **OpenAI** | ✅ Stable | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
| **Google Gemini** | ✅ Stable | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
| **Ollama** | ✅ Stable | N/A (local) | `llama3.3`, `qwen2.5`, `phi4` |
| **Cerebras** | ✅ Stable | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
| **Groq** | ✅ Stable | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
| **Mistral** | 🚧 Planned | `MISTRAL_API_KEY` | TBD |
| **Cohere** | 🚧 Planned | `COHERE_API_KEY` | TBD |
### Custom endpoints
ZeroClaw supports OpenAI-compatible endpoints:
```toml
[providers.custom]
enabled = true
api_key = "..."
base_url = "https://api.your-llm-provider.com/v1"
model = "your-model-name"
```
Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to reach any LLM through the OpenAI interface.
See the [providers reference](docs/providers-reference.md) for full configuration details.
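As a rough sketch of that LiteLLM setup (the model name and port here are assumptions; check LiteLLM's own documentation for the current CLI):

```bash
# Start a local OpenAI-compatible proxy with LiteLLM
# (by default the proxy listens on http://0.0.0.0:4000)
pip install 'litellm[proxy]'
litellm --model gpt-4o
# Then point ZeroClaw's custom provider at it:
#   base_url = "http://127.0.0.1:4000/v1"
```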
## Channel support
| Channel | Status | Authentication | Notes |
| ------------ | ----------- | ------------------------ | --------------------------------------------------------- |
| **Telegram** | ✅ Stable | Bot token | Full support including files, images, and inline buttons |
| **Matrix** | ✅ Stable | Password or token | E2EE support with device verification |
| **Slack** | 🚧 Planned | OAuth or bot token | Requires workspace access |
| **Discord** | 🚧 Planned | Bot token | Requires guild permissions |
| **WhatsApp** | 🚧 Planned | Twilio or official API | Requires a business account |
| **CLI** | ✅ Stable | None | Direct conversational interface |
| **Web** | 🚧 Planned | API key or OAuth | Browser-based chat interface |
See the [channels reference](docs/channels-reference.md) for full configuration instructions.
## Tool support
ZeroClaw ships built-in tools for code execution, filesystem access, and web retrieval:
| Tool | Description | Required Runtime |
| -------------------- | --------------------------- | ------------------------------ |
| **bash** | Executes shell commands | Native or Docker |
| **python** | Executes Python scripts | Python 3.8+ (native) or Docker |
| **javascript** | Executes Node.js code | Node.js 18+ (native) or Docker |
| **filesystem_read** | Reads files | Native or Docker |
| **filesystem_write** | Writes files | Native or Docker |
| **web_fetch** | Fetches web content | Native or Docker |
### Execution security
- **Native runtime** — runs as the daemon's user process, full filesystem access
- **Docker runtime** — full container isolation, separate filesystems and networking
Configure the execution policy in `config.toml`:
```toml
[runtime]
kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
```
See the [configuration reference](docs/config-reference.md#runtime) for full security options.
## Deployment
### Local deployment (development)
```bash
zeroclaw daemon start
zeroclaw agent start
```
### Server deployment (production)
Use systemd to manage the daemon and agent as services:
```bash
# Install the binary
cargo install --path . --locked
# Configure the workspace
zeroclaw init
# Install the systemd service files
sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
# Enable and start the services
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
sudo systemctl start zeroclaw-daemon zeroclaw-agent
# Check status
sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```
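To follow the services' output once they are running, the standard journald tooling applies; nothing ZeroClaw-specific is assumed here:

```bash
# Tail live logs for both units
journalctl -u zeroclaw-daemon -u zeroclaw-agent -f
# Show only errors since the last boot
journalctl -u zeroclaw-daemon -p err -b
```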
See the [network deployment guide](docs/network-deployment.md) for full production deployment instructions.
### Docker
```bash
# Build the image
docker build -t zeroclaw:latest .
# Run the container
docker run -d \
  --name zeroclaw \
  -v ~/.zeroclaw/workspace:/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  zeroclaw:latest
```
See the [`Dockerfile`](Dockerfile) for build details and configuration options.
### Edge devices
ZeroClaw is designed to run on low-power devices:
- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, very low cost
- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support
See the [hardware guide](docs/hardware/README.md) for device-specific setup instructions.
## Tunnels (public exposure)
Expose your local ZeroClaw daemon to the public network through secure tunnels:
```bash
zeroclaw tunnel start --provider cloudflare
```
Supported tunnel providers:
- **Cloudflare Tunnel** — free HTTPS, no port exposure, multi-domain support
- **Ngrok** — quick setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port
See the [configuration reference](docs/config-reference.md#tunnel) for full configuration options.
## Security
ZeroClaw implements multiple layers of security:
### Pairing
The daemon generates a pairing secret on first run, stored in `~/.zeroclaw/workspace/.pairing`. Clients (the agent, the CLI) must present this secret to connect.
```bash
zeroclaw pairing rotate # Generates a new secret and invalidates the old one
```
### Sandboxing
- **Docker runtime** — full container isolation with separate filesystems and networking
- **Native runtime** — runs as a user process, scoped to the workspace by default
### Allowlists
Channels can restrict access by user ID:
```toml
[channels.telegram]
enabled = true
allowed_users = [123456789, 987654321] # Explicit allowlist
```
### Encryption
- **Matrix E2EE** — full end-to-end encryption with device verification
- **TLS transport** — all API and tunnel traffic uses HTTPS/TLS
See the [security documentation](docs/security/README.md) for full policies and practices.
## Observability
ZeroClaw logs to `~/.zeroclaw/workspace/logs/` by default. Logs are stored per component:
```
~/.zeroclaw/workspace/logs/
├── daemon.log    # Daemon logs (startup, API requests, errors)
├── agent.log     # Agent logs (message routing, tool execution)
├── telegram.log  # Channel-specific logs (if enabled)
└── matrix.log    # Channel-specific logs (if enabled)
```
### Logging configuration
```toml
[logging]
level = "info" # debug, info, warn, error
path = "~/.zeroclaw/workspace/logs/"
rotation = "daily" # daily, hourly, size
max_size_mb = 100 # For size-based rotation
retention_days = 30 # Auto-delete after N days
```
See the [configuration reference](docs/config-reference.md#logging) for all logging options.
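For quick ad-hoc inspection, the per-component files can be watched directly; a small sketch:

```bash
# Follow daemon and agent logs together
tail -f ~/.zeroclaw/workspace/logs/daemon.log \
        ~/.zeroclaw/workspace/logs/agent.log
# Surface recent errors across all components (case-insensitive)
grep -i "error" ~/.zeroclaw/workspace/logs/*.log | tail -n 20
```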
### Metrics (planned)
Prometheus metrics support for production monitoring is coming soon. Tracked in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
## Skills
ZeroClaw supports custom skills — reusable modules that extend the system's capabilities.
### Skill definition
Skills are stored in `~/.zeroclaw/workspace/skills/<skill-name>/` with this structure:
```
skills/
└── my-skill/
    ├── skill.toml   # Skill metadata (name, description, dependencies)
    ├── prompt.md    # System prompt for the AI
    └── tools/       # Optional custom tools
        └── my_tool.py
```
### Example skill
```toml
# skills/web-research/skill.toml
[skill]
name = "web-research"
description = "Searches the web and summarizes results"
version = "1.0.0"
[dependencies]
tools = ["web_fetch", "bash"]
```
```markdown
<!-- skills/web-research/prompt.md -->
You are a research assistant. When asked to research something:
1. Use web_fetch to retrieve content
2. Summarize the findings in an easy-to-read format
3. Cite sources with URLs
```
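Scaffolding a new skill amounts to creating that directory layout; a minimal sketch using the example above (the restart at the end assumes skills are picked up on agent start, as described below):

```bash
# Create the web-research skill skeleton shown above
SKILL=~/.zeroclaw/workspace/skills/web-research
mkdir -p "$SKILL/tools"
cat > "$SKILL/skill.toml" <<'EOF'
[skill]
name = "web-research"
description = "Searches the web and summarizes results"
version = "1.0.0"
[dependencies]
tools = ["web_fetch", "bash"]
EOF
# Restart the agent so the new skill is loaded
zeroclaw agent restart
```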
### Using skills
Skills load automatically when the agent starts. Refer to them by name in conversation:
```
User: Use the web-research skill to find recent AI news
Bot: [loads the web-research skill, runs web_fetch, summarizes the results]
```
See the [Skills](#skills) section for complete skill-creation instructions.
## Open Skills
ZeroClaw supports [Open Skills](https://github.com/openagents-com/open-skills) — a modular, vendor-neutral system for extending AI agent capabilities.
### Enabling Open Skills
```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # Optional
```
You can also override at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`.
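For example, a one-off run with both overrides (the directory path here is a placeholder):

```bash
# Enable Open Skills for a single agent restart without editing config.toml
ZEROCLAW_OPEN_SKILLS_ENABLED=true \
ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills" \
zeroclaw agent restart
```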
## Development
```bash
cargo build # Development build
cargo build --release # Release build (codegen-units=1; works on all hardware including Raspberry Pi)
cargo build --profile release-fast # Faster build (codegen-units=8; needs 16 GB+ RAM)
cargo test # Run the full test suite
cargo clippy --locked --all-targets -- -D clippy::correctness
cargo fmt # Formatting
# Run the SQLite vs Markdown comparison benchmark
cargo test --test memory_comparison -- --nocapture
```
### Pre-push hook
A git hook runs `cargo fmt --check`, `cargo clippy -- -D warnings`, and `cargo test` before every push. Enable it once:
```bash
git config core.hooksPath .githooks
```
To skip the hook when you need a quick push during development:
```bash
git push --no-verify
```
### Build troubleshooting (OpenSSL errors on Linux)
If you hit an `openssl-sys` build error, re-sync dependencies and recompile against the repository lockfile:
```bash
git pull
cargo build --release --locked
cargo install --path . --force --locked
```
ZeroClaw is configured to use `rustls` for its HTTP/TLS dependencies; `--locked` keeps the transitive dependency graph deterministic on clean environments.
## Collaboration & documentation
Start with the documentation hub for a task-based map:
- Documentation hub: [`docs/README.md`](docs/README.md)
- Unified documentation index: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Commands reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Configuration reference: [`docs/config-reference.md`](docs/config-reference.md)
- Providers reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channels reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Docs inventory/classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/issue triage snapshot (as of February 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
Key collaboration references:
- Documentation hub: [docs/README.md](docs/README.md)
- Documentation template: [docs/doc-template.md](docs/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel configuration reference: [docs/channels-reference.md](docs/channels-reference.md)
- Encrypted Matrix room operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Contributing guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- Ownership map and CI triage: [docs/ci-map.md](docs/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)
For deployment and runtime operations:
- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy agent playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
## Support ZeroClaw
If ZeroClaw helps your work and you'd like to support its continued development, you can donate here:
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
### 🙏 Special thanks
Heartfelt thanks to the communities and institutions that inspire and sustain this open-source work:
- **Harvard University** — for fostering intellectual curiosity and pushing the boundaries of what's possible.
- **MIT** — for championing open knowledge, open source, and the belief that technology should be accessible to everyone.
- **Sundai Club** — for the community, the energy, and the relentless drive to build things that matter.
- **The world and beyond** 🌍✨ — for every contributor, dreamer, and builder out there making open source a force for good. This is for you.
We build in the open because the best ideas come from everywhere. If you're reading this, you're part of it. Welcome. 🦀❤️
## ⚠️ Official repository & impersonation warning
**This is the only official ZeroClaw repository:**
> <https://github.com/zeroclaw-labs/zeroclaw>
Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and unaffiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).
If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
---
## License
ZeroClaw is dual-licensed for maximum openness and contributor protection:
| License | Use Cases |
| ---------------------------- | ------------------------------------------------------------ |
| [MIT](LICENSE-MIT) | Open source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, enterprise, commercial deployment |
You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
### Trademark
The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
### Contributor protections
- **You retain copyright** to your contributions
- The **patent grant** (Apache 2.0) protects you from patent claims by other contributors
- Your contributions are **permanently attributed** in the commit history and [NOTICE](NOTICE)
- Trademark rights are not transferred by contributing
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` ← `src/providers/`
- New `Channel` ← `src/channels/`
- New `Observer` ← `src/observability/`
- New `Tool` ← `src/tools/`
- New `Memory` ← `src/memory/`
- New `Tunnel` ← `src/tunnel/`
- New `Skill` ← `~/.zeroclaw/workspace/skills/<n>/`
---
**ZeroClaw** — zero overhead, zero compromises. Deploy anywhere. Replace anything. 🦀
## Star History
<p align="center">
<a href="https://www.star-history.com/#zeroclaw-labs/zeroclaw&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
</picture>
</a>
</p>
+179
@@ -0,0 +1,179 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center">
<strong>Zero overhead. Zero compromises. 100% Rust. 100% agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>
<p align="center">
🌐 <strong>ভাষা:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
---
## What is ZeroClaw?
ZeroClaw is a lightweight, modular, and extensible AI assistant infrastructure built in Rust. It connects different LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).
### Key features
- **🦀 Written in Rust**: high performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: supports OpenAI, Anthropic, Google Gemini, Ollama, and more
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: easily add custom tools
- **🔒 Security-first**: reverse-proxy, privacy-first design
---
## Quick Start
### Requirements
- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)
### Installation
```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# Build
cargo build --release
# Run
cargo run --release
```
### With Docker
```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```
---
## Configuration
ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.
```yaml
# Default provider
provider: anthropic
# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o
# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db
# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```
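The `${...}` references above suggest the loader expands environment variables, so those variables must be set before launch; a minimal sketch under that assumption (key values are placeholders):

```bash
# Export the variables referenced by config.yaml, then start ZeroClaw
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
cargo run --release
```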
---
## Documentation
For detailed documentation, see:
- [Documentation Hub](docs/README.md)
- [Commands Reference](docs/commands-reference.md)
- [Providers Reference](docs/providers-reference.md)
- [Channels Reference](docs/channels-reference.md)
- [Configuration Reference](docs/config-reference.md)
---
## Contributing
Contributions are welcome! Please read the [contributing guide](CONTRIBUTING.md).
---
## License
This project is dual-licensed:
- MIT License
- Apache License, Version 2.0
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.
---
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
## Sponsor
If ZeroClaw is useful to you, please consider buying us a coffee:
[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)
+914
@@ -0,0 +1,914 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center">
<strong>Zero overhead. Zero compromises. 100% Rust. 100% agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
Postaveno studenty a členy komunit Harvard, MIT a Sundai.Club.
</p>
<p align="center">
🌐 <strong>Jazyky:</strong><a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
<p align="center">
<a href="#rychlý-start">Rychlý Start</a> |
<a href="bootstrap.sh">Jedno-klikové nastavení</a> |
<a href="docs/README.md">Dokumentační Centrum</a> |
<a href="docs/SUMMARY.md">Obsah Dokumentace</a>
</p>
<p align="center">
<strong>Rychlý přístup:</strong>
<a href="docs/reference/README.md">Reference</a> ·
<a href="docs/operations/README.md">Operace</a> ·
<a href="docs/troubleshooting.md">Řešení problémů</a> ·
<a href="docs/security/README.md">Bezpečnost</a> ·
<a href="docs/hardware/README.md">Hardware</a> ·
<a href="docs/contributing/README.md">Příspívání</a>
</p>
<p align="center">
<strong>Rychlá, lehká a plně autonomní AI asistent infrastruktura</strong><br />
Nasazujte kdekoliv. Měňte cokoliv.
</p>
<p align="center">
ZeroClaw je <strong>operační systém runtime</strong> pro workflow agentů — infrastruktura která abstrahuje modely, nástroje, paměť a provádění pro stavbu agentů jednou a spouštění kdekoliv.
</p>
<p align="center"><code>Architektura založená na traitech · bezpečný runtime defaultně · vyměnitelný poskytovatel/kanál/nástroj · vše je připojitelné</code></p>
### 📢 Announcements
Use this table for important notices (compatibility changes, security advisories, maintenance windows, and release holds).
| Date (UTC) | Level | Notice | Action |
| ---------- | ----- | ------ | ------ |
| 2026-02-19 | _Critical_ | **We are not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository impersonates our official site/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience in the meantime. We are still seeing impersonation attempts: do not take part in any investment/fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its authentication and credential-use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the consumer Terms of Use. | Please temporarily avoid Claude Code OAuth integrations to prevent any potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
- 🏎️ **Lightweight runtime by default:** Common CLI workflows and status commands run within a few megabytes of memory in production builds.
- 💰 **Cost-efficient deployment:** Designed for low-cost boards and small cloud instances with no heavy runtime dependencies.
- ⚡ **Fast cold starts:** The single-binary Rust runtime keeps command and daemon startup near-instant for day-to-day operations.
- 🌍 **Portable architecture:** A single-binary workflow on ARM, x86, and RISC-V with swappable provider/channel/tool.
### Why teams choose ZeroClaw
- **Lightweight by default:** small Rust binary, fast startup, low memory footprint.
- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scoping.
- **Fully swappable:** core systems are traits (providers, channels, tools, memory, tunnels).
- **No vendor lock-in:** OpenAI-compatible provider support plus pluggable custom endpoints.
## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproducible)
A quick benchmark on a local machine (macOS arm64, February 2026), normalized for 0.8 GHz edge hardware.
| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
| **Language** | TypeScript | Python | Go | **Rust** |
| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
| **Startup (0.8 GHz core)** | > 500 s | > 30 s | < 1 s | **< 10 ms** |
| **Binary size** | ~28 MB (dist) | N/A (scripts) | ~8 MB | **3.4 MB** |
| **Cost** | Mac Mini $599 | Linux SBC ~$50 | $10 Linux board | **Any $10 device** |
> Notes: ZeroClaw results are measured on production builds with `/usr/bin/time -l`. OpenClaw requires a Node.js runtime (typically ~390 MB of additional memory overhead), while NanoBot requires a Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.
<p align="center">
<img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw comparison" width="800" />
</p>
### Reproducible local measurement
Benchmark claims can drift as the code and toolchains evolve, so always measure your current build locally:
```bash
cargo build --release
ls -lh target/release/zeroclaw
/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status
```
Sample measurement (macOS arm64, taken February 18, 2026):
- Release binary size: `8.8M`
- `zeroclaw --help`: real time about `0.02s`, peak memory footprint ~`3.9 MB`
- `zeroclaw status`: real time about `0.01s`, peak memory footprint ~`4.1 MB`
## Prerequisites
<details>
<summary><strong>Windows</strong></summary>
### Windows — Required
1. **Visual Studio Build Tools** (provides the MSVC linker and the Windows SDK):
```powershell
winget install Microsoft.VisualStudio.2022.BuildTools
```
During installation (or via the Visual Studio Installer), select the **"Desktop development with C++"** workload.
2. **Rust toolchain:**
```powershell
winget install Rustlang.Rustup
```
After installing, open a new terminal and run `rustup default stable` to make sure the stable toolchain is active.
3. **Verify** that both work:
```powershell
rustc --version
cargo --version
```
### Windows — Optional
- **Docker Desktop** — required only if you use the [Docker sandboxed runtime](#current-runtime-support) (`runtime.kind = "docker"`). Install via `winget install Docker.DockerDesktop`.
</details>
<details>
<summary><strong>Linux / macOS</strong></summary>
### Linux / macOS — Required
1. **Essential build tools:**
- **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
- **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
- **macOS:** install the Xcode Command Line Tools: `xcode-select --install`
2. **Rust toolchain:**
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
See [rustup.rs](https://rustup.rs) for details.
3. **Verify:**
```bash
rustc --version
cargo --version
```
### Linux / macOS — Optional
- **Docker** — required only if you use the [Docker sandboxed runtime](#current-runtime-support) (`runtime.kind = "docker"`).
- **Linux (Debian/Ubuntu):** see [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
- **Linux (Fedora/RHEL):** see [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
- **macOS:** install Docker Desktop from [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)
</details>
## Quick Start
### Option 1: Automated setup (recommended)
The `bootstrap.sh` script installs Rust, clones ZeroClaw, compiles it, and sets up your initial development environment:
```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/bootstrap.sh | bash
```
This will:
1. Install Rust (if missing)
2. Clone the ZeroClaw repository
3. Compile ZeroClaw in release mode
4. Install `zeroclaw` into `~/.cargo/bin/`
5. Create the default workspace structure in `~/.zeroclaw/workspace/`
6. Generate a starter configuration file at `~/.zeroclaw/workspace/config.toml`
After bootstrapping, reload your shell or run `source ~/.cargo/env` to use the `zeroclaw` command globally.
### Option 2: Manual installation
<details>
<summary><strong>Click to see the manual installation steps</strong></summary>
```bash
# 1. Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# 2. Compile in release mode
cargo build --release --locked
# 3. Install the binary
cargo install --path . --locked
# 4. Initialize the workspace
zeroclaw init
# 5. Verify the installation
zeroclaw --version
zeroclaw status
```
</details>
### After installation
Once installed (via bootstrap or manually), you should see:
```
~/.zeroclaw/workspace/
├── config.toml   # Main configuration
├── .pairing      # Pairing secrets (generated on first run)
├── logs/         # Daemon/agent logs
├── skills/       # Custom skills
└── memory/       # Conversation context storage
```
**Next steps:**
1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
2. Check the [configuration reference](docs/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
4. Test it over your preferred channel (see the [channels reference](docs/channels-reference.md))
## Configuration
Edit `~/.zeroclaw/workspace/config.toml` to configure providers, channels, and system behavior.
### Quick configuration reference
```toml
[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
[providers.openai]
api_key = "sk-..."
model = "gpt-4o"
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@bot:matrix.org"
password = "..."
[memory]
kind = "markdown" # or "sqlite" or "none"
[runtime]
kind = "native" # or "docker" (requires Docker)
```
**Full reference docs:**
- [Configuration reference](docs/config-reference.md) — all settings, validation, and defaults
- [Providers reference](docs/providers-reference.md) — provider-specific AI configurations
- [Channels reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling
### Current runtime support
ZeroClaw supports two code-execution backends:
- **`native`** (default) — direct process execution, the fastest path, ideal for trusted environments
- **`docker`** — full container isolation, hardened security policies, requires Docker
Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/config-reference.md#runtime) for full details.
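Before switching to the Docker backend, it is worth confirming the local Docker daemon is reachable; a quick check using only the plain Docker CLI (nothing ZeroClaw-specific is assumed):

```bash
# Verify Docker is installed and its daemon is running before setting
# runtime.kind = "docker"
docker info >/dev/null 2>&1 && echo "Docker OK" || echo "Docker unavailable"
```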
## Commands
```bash
# Workspace management
zeroclaw init             # Initialize a new workspace
zeroclaw status           # Show daemon/agent status
zeroclaw config validate  # Validate config.toml syntax and values
# Daemon management
zeroclaw daemon start     # Start the daemon in the background
zeroclaw daemon stop      # Stop the running daemon
zeroclaw daemon restart   # Restart the daemon (reloads config)
zeroclaw daemon logs      # Show daemon logs
# Agent management
zeroclaw agent start      # Start the agent (requires a running daemon)
zeroclaw agent stop       # Stop the agent
zeroclaw agent restart    # Restart the agent (reloads config)
# Pairing operations
zeroclaw pairing init     # Generate a new pairing secret
zeroclaw pairing rotate   # Rotate the existing pairing secret
# Tunnels (for public exposure)
zeroclaw tunnel start     # Start a tunnel to the local daemon
zeroclaw tunnel stop      # Stop the active tunnel
# Diagnostics
zeroclaw doctor           # Run system health checks
zeroclaw version          # Show version and build info
```
See the [commands reference](docs/commands-reference.md) for full options and examples.
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                        Channels (trait)                         │
│    Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom     │
└─────────────────────────┬───────────────────────────────────────┘
┌─────────────────────────┴───────────────────────────────────────┐
│                       Agent Orchestrator                        │
│   ┌──────────────┐   ┌──────────────┐   ┌──────────────┐        │
│   │   Message    │   │   Context    │   │     Tool     │        │
│   │   Routing    │   │    Memory    │   │  Execution   │        │
│   └──────────────┘   └──────────────┘   └──────────────┘        │
└─────────────────────────┬───────────────────────────────────────┘
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
  │   Provider   │ │    Memory    │ │    Tools     │
  │   (trait)    │ │   (trait)    │ │   (trait)    │
  ├──────────────┤ ├──────────────┤ ├──────────────┤
  │  Anthropic   │ │   Markdown   │ │  Filesystem  │
  │   OpenAI     │ │    SQLite    │ │     Bash     │
  │   Gemini     │ │     None     │ │  Web Fetch   │
  │   Ollama     │ │    Custom    │ │    Custom    │
  │   Custom     │ └──────────────┘ └──────────────┘
  └──────────────┘
┌─────────────────────────────────────────────────────────────────┐
│                         Runtime (trait)                         │
│                        Native │ Docker                          │
└─────────────────────────────────────────────────────────────────┘
```
**Key principles:**
- Everything is a **trait** — providers, channels, tools, memory, tunnels
- Channels call the orchestrator; the orchestrator calls providers + tools
- The memory system manages conversation context (Markdown, SQLite, or none)
- The runtime abstracts code execution (native or Docker)
- No vendor lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes (see the sketch below)
See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
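For example, a hedged sketch of that no-lock-in swap (assuming an `[orchestrator]` table like the one in the multi-provider example below, and GNU `sed`):
```bash
# Sketch: change the default provider from Anthropic to OpenAI purely in
# config, with no code changes. Assumes default_provider is already set
# as in the multi-provider example below (GNU sed syntax).
sed -i 's/^default_provider = "anthropic"/default_provider = "openai"/' \
  ~/.zeroclaw/workspace/config.toml

zeroclaw config validate   # verify syntax and values
zeroclaw agent restart     # the agent restart reloads the config
```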
## Examples
### Telegram Bot
```toml
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
allowed_users = [987654321] # Your Telegram user ID
```
Start the daemon + agent, then send a message to your bot on Telegram:
```
/start
Hi! Could you help me write a Python script?
```
The bot replies with AI-generated code, executes tools when requested, and maintains conversation context.
### Matrix (End-to-End Encryption)
```toml
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@zeroclaw:matrix.org"
password = "..."
device_name = "zeroclaw-prod"
e2ee_enabled = true
```
Invite `@zeroclaw:matrix.org` to an encrypted room and the bot replies with full encryption. See the [Matrix E2EE Guide](docs/matrix-e2ee-guide.md) for device-verification setup.
### Multi-Provider
```toml
[providers.anthropic]
enabled = true
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
[providers.openai]
enabled = true
api_key = "sk-..."
model = "gpt-4o"
[orchestrator]
default_provider = "anthropic"
fallback_providers = ["openai"] # Failover on provider error
```
If Anthropic fails or hits a rate limit, the orchestrator automatically fails over to OpenAI.
### Custom Memory
```toml
[memory]
kind = "sqlite"
path = "~/.zeroclaw/workspace/memory/conversations.db"
retention_days = 90 # Automatic cleanup after 90 days
```
Or use Markdown for human-readable storage:
```toml
[memory]
kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```
See the [configuration reference](docs/config-reference.md#memory) for all memory options.
## Provider Support
| Provider | Status | API Key | Example Models |
| ----------------- | ---------- | ------------------- | ---------------------------------------------------- |
| **Anthropic** | ✅ Stable | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
| **OpenAI** | ✅ Stable | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
| **Google Gemini** | ✅ Stable | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
| **Ollama** | ✅ Stable | N/A (local) | `llama3.3`, `qwen2.5`, `phi4` |
| **Cerebras** | ✅ Stable | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
| **Groq** | ✅ Stable | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
| **Mistral** | 🚧 Planned | `MISTRAL_API_KEY` | TBD |
| **Cohere** | 🚧 Planned | `COHERE_API_KEY` | TBD |
### Custom Endpoints
ZeroClaw supports OpenAI-compatible endpoints:
```toml
[providers.custom]
enabled = true
api_key = "..."
base_url = "https://api.your-llm-provider.com/v1"
model = "your-model-name"
```
Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to access any LLM through the OpenAI interface.
See the [provider reference](docs/providers-reference.md) for complete configuration details.
## Channel Support
| Channel | Status | Authentication | Notes |
| ------------ | ---------- | ------------------------ | ----------------------------------------------------- |
| **Telegram** | ✅ Stable | Bot token | Full support including files, images, inline buttons |
| **Matrix** | ✅ Stable | Password or token | E2EE support with device verification |
| **Slack** | 🚧 Planned | OAuth or bot token | Requires workspace access |
| **Discord** | 🚧 Planned | Bot token | Requires guild permissions |
| **WhatsApp** | 🚧 Planned | Twilio or official API | Requires a business account |
| **CLI** | ✅ Stable | None | Direct conversational interface |
| **Web** | 🚧 Planned | API key or OAuth | Browser-based chat interface |
See the [channel reference](docs/channels-reference.md) for complete configuration instructions.
## Tool Support
ZeroClaw provides built-in tools for code execution, filesystem access, and web retrieval:
| Tool | Description | Required Runtime |
| -------------------- | ------------------------ | ------------------------------- |
| **bash** | Executes shell commands | Native or Docker |
| **python** | Executes Python scripts | Python 3.8+ (native) or Docker |
| **javascript** | Executes Node.js code | Node.js 18+ (native) or Docker |
| **filesystem_read** | Reads files | Native or Docker |
| **filesystem_write** | Writes files | Native or Docker |
| **web_fetch** | Fetches web content | Native or Docker |
### Execution Security
- **Native runtime** — runs as the daemon's user process, full filesystem access
- **Docker runtime** — full container isolation, separate filesystems and networks
Configure the execution policy in `config.toml`:
```toml
[runtime]
kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
```
See the [configuration reference](docs/config-reference.md#runtime) for complete security options.
## Deployment
### Local Deployment (Development)
```bash
zeroclaw daemon start
zeroclaw agent start
```
### Server Deployment (Production)
Use systemd to manage the daemon and agent as services:
```bash
# Install the binary
cargo install --path . --locked
# Configure the workspace
zeroclaw init
# Create the systemd service files
sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
# Enable and start the services
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
sudo systemctl start zeroclaw-daemon zeroclaw-agent
# Verify status
sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```
See the [network deployment guide](docs/network-deployment.md) for complete production deployment instructions.
### Docker
```bash
# Build the image
docker build -t zeroclaw:latest .
# Run the container
docker run -d \
  --name zeroclaw \
  -v ~/.zeroclaw/workspace:/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  zeroclaw:latest
```
See the [`Dockerfile`](Dockerfile) for build details and configuration options.
### Edge Hardware
ZeroClaw is designed to run on low-power hardware:
- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, ultra-low cost
- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support
See the [Hardware Guide](docs/hardware/README.md) for device-specific setup instructions.
## Tunneling (Public Exposure)
Expose your local ZeroClaw daemon to the public network through secure tunnels:
```bash
zeroclaw tunnel start --provider cloudflare
```
Supported tunnel providers:
- **Cloudflare Tunnel** — free HTTPS, no port exposure, multi-domain support
- **Ngrok** — quick setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port
See the [configuration reference](docs/config-reference.md#tunnel) for complete configuration options.
## Security
ZeroClaw implements multiple layers of security:
### Pairing
The daemon generates a pairing secret on first start, stored in `~/.zeroclaw/workspace/.pairing`. Clients (the agent, the CLI) must present this secret to connect.
```bash
zeroclaw pairing rotate # Generate a new secret and invalidate the old one
```
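A minimal rotation sketch, assuming connected clients re-read `~/.zeroclaw/workspace/.pairing` when restarted:
```bash
# Sketch: full pairing lifecycle using the commands documented above.
zeroclaw pairing init      # first-time setup: generate a pairing secret
zeroclaw pairing rotate    # later: issue a new secret, invalidating the old
zeroclaw agent restart     # assumed: restart clients so they present the new secret
```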
### Sandboxing
- **Docker runtime** — full container isolation with separate filesystems and networks
- **Native runtime** — runs as a user process, scoped to the workspace by default
### Allowlists
Channels can restrict access by user ID:
```toml
[channels.telegram]
enabled = true
allowed_users = [123456789, 987654321] # Explicit allowlist
```
### Encryption
- **Matrix E2EE** — full end-to-end encryption with device verification
- **TLS transport** — all API and tunnel traffic uses HTTPS/TLS
See the [security documentation](docs/security/README.md) for complete policies and practices.
## Observability
ZeroClaw logs to `~/.zeroclaw/workspace/logs/` by default. Logs are stored per component:
```
~/.zeroclaw/workspace/logs/
├── daemon.log     # Daemon logs (startup, API requests, errors)
├── agent.log      # Agent logs (message routing, tool execution)
├── telegram.log   # Channel-specific logs (if enabled)
└── matrix.log     # Channel-specific logs (if enabled)
```
### Logging Configuration
```toml
[logging]
level = "info" # debug, info, warn, error
path = "~/.zeroclaw/workspace/logs/"
rotation = "daily" # daily, hourly, size
max_size_mb = 100 # For size-based rotation
retention_days = 30 # Automatic cleanup after N days
```
See the [configuration reference](docs/config-reference.md#logging) for all logging options.
### Metrics (Planned)
Prometheus metrics support for production monitoring is coming soon. Tracked in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
## Skills
ZeroClaw supports custom skills — reusable modules that extend the system's capabilities.
### Skill Definition
Skills live in `~/.zeroclaw/workspace/skills/<skill-name>/` with this structure:
```
skills/
└── my-skill/
    ├── skill.toml   # Skill metadata (name, description, dependencies)
    ├── prompt.md    # System prompt for the AI
    └── tools/       # Optional custom tools
        └── my_tool.py
```
### Skill Example
```toml
# skills/web-research/skill.toml
[skill]
name = "web-research"
description = "Hledá na webu a shrnuje výsledky"
version = "1.0.0"
[dependencies]
tools = ["web_fetch", "bash"]
```
```markdown
<!-- skills/web-research/prompt.md -->
You are a research assistant. When asked to research something:
1. Use web_fetch to retrieve the content
2. Summarize the results in an easy-to-read format
3. Cite sources with URLs
```
### Skill Usage
Skills are loaded automatically at agent startup. Reference them by name in conversations:
```
User: Use the web-research skill to find the latest AI news
Bot: [loads the web-research skill, runs web_fetch, summarizes the results]
```
See the [Skills](#skills) section for complete skill-creation instructions.
## Open Skills
ZeroClaw supports [Open Skills](https://github.com/openagents-com/open-skills) — a modular, provider-agnostic system for extending AI agent capabilities.
### Enable Open Skills
```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # optional
```
You can also override at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`; a sketch follows below.
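A sketch of a one-off override (the variable names come from the docs above; precedence over `config.toml` is an assumption):
```bash
# Sketch: enable Open Skills for this invocation only, pointing at a
# custom directory. Inline environment overrides are an assumption
# about standard precedence over config.toml.
ZEROCLAW_OPEN_SKILLS_ENABLED=true \
ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills" \
zeroclaw agent start
```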
## Development
```bash
cargo build                          # Dev build
cargo build --release                # Release build (codegen-units=1, works on all devices including Raspberry Pi)
cargo build --profile release-fast   # Faster build (codegen-units=8, requires 16 GB+ RAM)
cargo test                           # Run the full test suite
cargo clippy --locked --all-targets -- -D clippy::correctness
cargo fmt                            # Formatting
# Run the SQLite vs Markdown comparison benchmark
cargo test --test memory_comparison -- --nocapture
```
### Pre-push hook
A Git hook runs `cargo fmt --check`, `cargo clippy -- -D warnings`, and `cargo test` before every push. Enable it once:
```bash
git config core.hooksPath .githooks
```
### Build Troubleshooting (OpenSSL errors on Linux)
If you hit an `openssl-sys` build error, sync dependencies and recompile with the repository's lockfile:
```bash
git pull
cargo build --release --locked
cargo install --path . --force --locked
```
ZeroClaw is configured to use `rustls` for HTTP/TLS dependencies; `--locked` keeps the transitive dependency graph deterministic in clean environments.
To skip the hook when you need a quick push during development:
```bash
git push --no-verify
```
## Collaboration & Docs
Start with the documentation hub for a task-based map:
- Documentation hub: [`docs/README.md`](docs/README.md)
- Unified docs TOC: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Command reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Configuration reference: [`docs/config-reference.md`](docs/config-reference.md)
- Provider reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channel reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Docs inventory/classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/issue triage snapshot (as of February 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
Core collaboration references:
- Documentation hub: [docs/README.md](docs/README.md)
- Documentation template: [docs/doc-template.md](docs/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel configuration reference: [docs/channels-reference.md](docs/channels-reference.md)
- Matrix encrypted-room operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Contributing guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- Ownership and CI triage map: [docs/ci-map.md](docs/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)
For deployment and runtime operations:
- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy Agent Playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
## Support ZeroClaw
If ZeroClaw helps your work and you would like to support its continued development, you can donate here:
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee" alt="Kup Mi Kávu" /></a>
### 🙏 Special Thanks
Heartfelt thanks to the communities and institutions that inspire and sustain this open-source work:
- **Harvard University** — for nurturing intellectual curiosity and pushing the boundaries of what is possible.
- **MIT** — for championing open knowledge, open source, and the belief that technology should be accessible to everyone.
- **Sundai Club** — for the community, the energy, and the relentless drive to build things that matter.
- **The World and Beyond** 🌍✨ — to every contributor, dreamer, and builder out there making open source a force for good. This one is for you.
We build in the open because the best ideas come from anywhere. If you are reading this, you are part of it. Welcome. 🦀❤️
## ⚠️ Official Repository and Impersonation Warning
**This is the only official ZeroClaw repository:**
> <https://github.com/zeroclaw-labs/zeroclaw>
Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and not associated with this project**. Known unauthorized forks are listed in [TRADEMARK.md](TRADEMARK.md).
If you encounter impersonation or trademark abuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
---
## License
ZeroClaw is dual-licensed for maximum openness and contributor protection:
| License | Use cases |
| ---------------------------- | -------------------------------------------------------- |
| [MIT](LICENSE-MIT) | Open source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |
You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
### Trademark
The **ZeroClaw** name and logo are registered trademarks of ZeroClaw Labs. This license grants no permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
### Contributor Protections
- You **retain copyright** in your contributions
- The **patent grant** (Apache 2.0) protects you from patent claims by other contributors
- Your contributions are **permanently credited** in the commit history and [NOTICE](NOTICE)
- No trademark rights are transferred by contributing
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
- New `Observer` → `src/observability/`
- New `Tool` → `src/tools/`
- New `Memory` → `src/memory/`
- New `Tunnel` → `src/tunnel/`
- New `Skill` → `~/.zeroclaw/workspace/skills/<n>/`
---
**ZeroClaw** — Zero overhead. Zero compromise. Deploy anywhere. Swap everything. 🦀
## Star History
<p align="center">
<a href="https://www.star-history.com/#zeroclaw-labs/zeroclaw&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
<img alt="Graf Historie Hvězd" src="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
</picture>
</a>
</p>
@@ -0,0 +1,179 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center">
<strong>Zero overhead. Zero compromise. 100% Rust. 100% agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>
<p align="center">
🌐 <strong>Languages:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
---
## What is ZeroClaw?
ZeroClaw is a lightweight, swappable, and extensible AI assistant infrastructure built in Rust. It connects different LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).
### Key Features
- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Supports OpenAI, Anthropic, Google Gemini, Ollama, and others
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Add custom tools with ease
- **🔒 Security first**: Reverse proxy, privacy-first design
---
## Quick Start
### Requirements
- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)
### Installation
```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# Build
cargo build --release
# Run
cargo run --release
```
### With Docker
```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```
---
## Configuration
ZeroClaw uses a YAML configuration file. By default it looks for `config.yaml`.
```yaml
# Default provider
provider: anthropic
# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o
# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db
# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```
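The `${...}` placeholders above suggest environment-variable substitution; a sketch of exporting the referenced secrets before starting (values are placeholders):
```bash
# Sketch: export the secrets referenced by ${...} in config.yaml,
# then start ZeroClaw. Substitution from the environment is an assumption.
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export TELEGRAM_BOT_TOKEN="123456:ABC-DEF..."
cargo run --release
```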
---
## Documentation
For detailed documentation, see:
- [Documentation hub](docs/README.md)
- [Command reference](docs/commands-reference.md)
- [Provider reference](docs/providers-reference.md)
- [Channel reference](docs/channels-reference.md)
- [Configuration reference](docs/config-reference.md)
---
## Contributing
Contributions are welcome! Please read the [Contributing Guide](CONTRIBUTING.md).
---
## License
This project is dual-licensed:
- MIT License
- Apache License, Version 2.0
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.
---
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
## Sponsors
If ZeroClaw is useful to you, please consider buying us a coffee:
[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)
@@ -0,0 +1,918 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center">
<strong>Zero overhead. Zero compromise. 100% Rust. 100% agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
Built by students and members of the Harvard, MIT, and Sundai.Club communities.
</p>
<p align="center">
🌐 <strong>Languages:</strong> <a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
<p align="center">
<a href="#schnellstart">Schnellstart</a> |
<a href="bootstrap.sh">Ein-Klick-Einrichtung</a> |
<a href="docs/README.md">Dokumentations-Hub</a> |
<a href="docs/SUMMARY.md">Dokumentations-Inhaltsverzeichnis</a>
</p>
<p align="center">
<em>📝 Note: The documentation links point to the English-language documentation. Localized documentation for German is not yet available.</em>
</p>
<p align="center">
<strong>Quick links:</strong>
<a href="docs/reference/README.md">Reference</a> ·
<a href="docs/operations/README.md">Operations</a> ·
<a href="docs/troubleshooting.md">Troubleshooting</a> ·
<a href="docs/security/README.md">Security</a> ·
<a href="docs/hardware/README.md">Hardware</a> ·
<a href="docs/contributing/README.md">Contributing</a>
</p>
<p align="center">
<strong>Fast, lightweight, and fully autonomous AI assistant infrastructure</strong><br />
Deploy anywhere. Swap everything.
</p>
<p align="center">
ZeroClaw is the <strong>runtime operating system</strong> for agent workflows — infrastructure that abstracts models, tools, memory, and execution so you can build agents once and run them anywhere.
</p>
<p align="center"><code>Trait-basierte Architektur · sicheres Runtime standardmäßig · Provider/Channel/Tool austauschbar · alles ist steckbar</code></p>
### 📢 Announcements
Use this table for important notices (compatibility changes, security advisories, maintenance windows, and version blocks).
| Date (UTC) | Level | Notice | Action |
| ---------- | ----------- | ------ | ------ |
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository impersonates our official website/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social media accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience in the meantime. We continue to see impersonation attempts: do not take part in any investment or funding activity in ZeroClaw's name unless it is published through our official channels. | Treat [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its terms of use for authentication and credentials on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the consumer terms of use. | Please avoid Claude Code OAuth integrations for now to prevent potential losses. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
- 🏎️ **Lightweight runtime by default:** Common CLI workflows and status commands run within a few megabytes of memory on production builds.
- 💰 **Cost-effective deployment:** Designed for low-cost boards and small cloud instances with no heavy runtime dependencies.
- ⚡ **Fast cold starts:** The single-binary Rust runtime keeps command and daemon startup near-instant for everyday operations.
- 🌍 **Portable architecture:** One single-binary workflow across ARM, x86, and RISC-V with swappable providers/channels/tools.
### Why Teams Choose ZeroClaw
- **Lightweight by default:** small Rust binary, fast startup, low memory footprint.
- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scoping.
- **Fully swappable:** the core systems are traits (providers, channels, tools, memory, tunnels).
- **No provider lock-in:** OpenAI-compatible provider support + pluggable custom endpoints.
## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproducible)
Quick benchmark on a local machine (macOS arm64, Feb 2026), normalized to 0.8 GHz edge hardware.
| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
| **Language** | TypeScript | Python | Go | **Rust** |
| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
| **Startup (0.8 GHz core)** | > 500s | > 30s | < 1s | **< 10ms** |
| **Binary size** | ~28 MB (dist) | N/A (scripts) | ~8 MB | **3.4 MB** |
| **Cost** | Mac Mini $599 | Linux SBC ~$50 | $10 Linux board | **Any $10 hardware** |
> Notes: ZeroClaw results are measured on production builds with `/usr/bin/time -l`. OpenClaw requires the Node.js runtime (typically ~390 MB of additional memory overhead), while NanoBot requires the Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.
<p align="center">
<img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw Vergleich" width="800" />
</p>
### Reproducible Local Measurement
Benchmark claims can drift as the code and toolchains evolve, so always measure your current build locally:
```bash
cargo build --release
ls -lh target/release/zeroclaw
/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status
```
Sample measurement (macOS arm64, taken on February 18, 2026):
- Release binary size: `8.8M`
- `zeroclaw --help`: real time approx. `0.02s`, peak memory footprint ~`3.9 MB`
- `zeroclaw status`: real time approx. `0.01s`, peak memory footprint ~`4.1 MB`
## Prerequisites
<details>
<summary><strong>Windows</strong></summary>
### Windows — Required
1. **Visual Studio Build Tools** (provides the MSVC linker and the Windows SDK):
```powershell
winget install Microsoft.VisualStudio.2022.BuildTools
```
Select the **"Desktop development with C++"** workload during installation (or via the Visual Studio Installer).
2. **Rust toolchain:**
```powershell
winget install Rustlang.Rustup
```
After installation, open a new terminal and run `rustup default stable` to make sure the stable toolchain is active.
3. **Verify** that both work:
```powershell
rustc --version
cargo --version
```
### Windows — Optional
- **Docker Desktop** — only required if you use the [Docker sandbox runtime](#current-runtime-support) (`runtime.kind = "docker"`). Install via `winget install Docker.DockerDesktop`.
</details>
<details>
<summary><strong>Linux / macOS</strong></summary>
### Linux / macOS — Required
1. **Essential build tools:**
- **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
- **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
- **macOS:** install the Xcode Command Line Tools: `xcode-select --install`
2. **Rust toolchain:**
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
See [rustup.rs](https://rustup.rs) for details.
3. **Verify:**
```bash
rustc --version
cargo --version
```
### Linux / macOS — Optional
- **Docker** — only required if you use the [Docker sandbox runtime](#current-runtime-support) (`runtime.kind = "docker"`).
- **Linux (Debian/Ubuntu):** see [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
- **Linux (Fedora/RHEL):** see [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
- **macOS:** install Docker Desktop via [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)
</details>
## Quick Start
### Option 1: Automated Setup (Recommended)
The `bootstrap.sh` script installs Rust, clones ZeroClaw, compiles it, and sets up your initial development environment:
```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/bootstrap.sh | bash
```
This will:
1. Install Rust (if not already present)
2. Clone the ZeroClaw repository
3. Compile ZeroClaw in release mode
4. Install `zeroclaw` into `~/.cargo/bin/`
5. Create the default workspace structure in `~/.zeroclaw/workspace/`
6. Generate a starter configuration file at `~/.zeroclaw/workspace/config.toml`
After bootstrapping, reload your shell or run `source ~/.cargo/env` to use the `zeroclaw` command globally; a quick check is sketched below.
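A quick post-bootstrap check, using commands documented in this README:
```bash
# Make the freshly installed binary available in the current shell.
source ~/.cargo/env
zeroclaw --version   # print version and build information
zeroclaw status      # show daemon/agent status
```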
### Option 2: Manual Installation
<details>
<summary><strong>Click to see the manual installation steps</strong></summary>
```bash
# 1. Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# 2. Compile in release mode
cargo build --release --locked
# 3. Install the binary
cargo install --path . --locked
# 4. Initialize the workspace
zeroclaw init
# 5. Verify the installation
zeroclaw --version
zeroclaw status
```
</details>
### After Installation
After installing (via bootstrap or manually), you should see:
```
~/.zeroclaw/workspace/
├── config.toml   # Main configuration
├── .pairing      # Pairing secrets (generated on first start)
├── logs/         # Daemon/agent logs
├── skills/       # Custom skills
└── memory/       # Conversation context storage
```
**Next steps:**
1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
2. Check the [configuration reference](docs/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
4. Test via your preferred channel (see the [channel reference](docs/channels-reference.md))
## Configuration
Edit `~/.zeroclaw/workspace/config.toml` to configure providers, channels, and system behavior.
### Quick Configuration Reference
```toml
[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
[providers.openai]
api_key = "sk-..."
model = "gpt-4o"
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@bot:matrix.org"
password = "..."
[memory]
kind = "markdown" # oder "sqlite" oder "none"
[runtime]
kind = "native" # oder "docker" (erfordert Docker)
```
**Complete reference documents:**
- [Configuration reference](docs/config-reference.md) — all settings, validation, defaults
- [Provider reference](docs/providers-reference.md) — AI provider-specific configuration
- [Channel reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling
### Current Runtime Support
ZeroClaw supports two code-execution backends:
- **`native`** (default) — direct process execution, the fastest path, ideal for trusted environments
- **`docker`** — full container isolation, hardened security policies, requires Docker
Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/config-reference.md#runtime) for full details.
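As a minimal sketch (assuming the default workspace path shown above and GNU `sed`), switching an existing workspace to the Docker backend could look like this:
```bash
# Sketch: flip the runtime backend from native to docker in config.toml.
# Assumes the [runtime] table from the quick reference above and GNU sed.
sed -i 's/^kind = "native"/kind = "docker"/' ~/.zeroclaw/workspace/config.toml

zeroclaw config validate   # confirm the edited config still parses
zeroclaw daemon restart    # restart so the daemon picks up the new runtime
```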
## Commands
```bash
# Workspace management
zeroclaw init             # Initialize a new workspace
zeroclaw status           # Show daemon/agent status
zeroclaw config validate  # Validate config.toml syntax and values
# Daemon management
zeroclaw daemon start     # Start the daemon in the background
zeroclaw daemon stop      # Stop the running daemon
zeroclaw daemon restart   # Restart the daemon (config reload)
zeroclaw daemon logs      # Show daemon logs
# Agent management
zeroclaw agent start      # Start the agent (requires a running daemon)
zeroclaw agent stop       # Stop the agent
zeroclaw agent restart    # Restart the agent (config reload)
# Pairing operations
zeroclaw pairing init     # Generate a new pairing secret
zeroclaw pairing rotate   # Rotate the existing pairing secret
# Tunneling (for public exposure)
zeroclaw tunnel start     # Start a tunnel to the local daemon
zeroclaw tunnel stop      # Stop the active tunnel
# Diagnostics
zeroclaw doctor           # Run system health checks
zeroclaw version          # Show version and build information
```
See the [command reference](docs/commands-reference.md) for complete options and examples.
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                        Channels (trait)                         │
│    Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom     │
└─────────────────────────┬───────────────────────────────────────┘
┌─────────────────────────┴───────────────────────────────────────┐
│                       Agent Orchestrator                        │
│   ┌──────────────┐   ┌──────────────┐   ┌──────────────┐        │
│   │   Message    │   │   Context    │   │     Tool     │        │
│   │   Routing    │   │    Memory    │   │  Execution   │        │
│   └──────────────┘   └──────────────┘   └──────────────┘        │
└─────────────────────────┬───────────────────────────────────────┘
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
  │   Provider   │ │    Memory    │ │    Tools     │
  │   (trait)    │ │   (trait)    │ │   (trait)    │
  ├──────────────┤ ├──────────────┤ ├──────────────┤
  │  Anthropic   │ │   Markdown   │ │  Filesystem  │
  │   OpenAI     │ │    SQLite    │ │     Bash     │
  │   Gemini     │ │     None     │ │  Web Fetch   │
  │   Ollama     │ │    Custom    │ │    Custom    │
  │   Custom     │ └──────────────┘ └──────────────┘
  └──────────────┘
┌─────────────────────────────────────────────────────────────────┐
│                         Runtime (trait)                         │
│                        Native │ Docker                          │
└─────────────────────────────────────────────────────────────────┘
```
**Key principles:**
- Everything is a **trait** — providers, channels, tools, memory, tunnels
- Channels call the orchestrator; the orchestrator calls providers + tools
- The memory system manages conversation context (Markdown, SQLite, or none)
- The runtime abstracts code execution (native or Docker)
- No provider lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes (see the sketch below)
See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
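For example, a hedged sketch of that no-lock-in swap (assuming an `[orchestrator]` table like the one in the multi-provider example below, and GNU `sed`):
```bash
# Sketch: change the default provider from Anthropic to OpenAI purely in
# config, with no code changes. Assumes default_provider is already set
# as in the multi-provider example below (GNU sed syntax).
sed -i 's/^default_provider = "anthropic"/default_provider = "openai"/' \
  ~/.zeroclaw/workspace/config.toml

zeroclaw config validate   # verify syntax and values
zeroclaw agent restart     # the agent restart reloads the config
```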
## Examples
### Telegram Bot
```toml
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
allowed_users = [987654321] # Your Telegram user ID
```
Start the daemon + agent, then send a message to your bot on Telegram:
```
/start
Hi! Could you help me write a Python script?
```
The bot replies with AI-generated code, executes tools when requested, and maintains conversation context.
### Matrix (End-to-End Encryption)
```toml
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@zeroclaw:matrix.org"
password = "..."
device_name = "zeroclaw-prod"
e2ee_enabled = true
```
Invite `@zeroclaw:matrix.org` to an encrypted room and the bot replies with full encryption. See the [Matrix E2EE Guide](docs/matrix-e2ee-guide.md) for device-verification setup.
### Multi-Provider
```toml
[providers.anthropic]
enabled = true
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
[providers.openai]
enabled = true
api_key = "sk-..."
model = "gpt-4o"
[orchestrator]
default_provider = "anthropic"
fallback_providers = ["openai"] # Failover on provider error
```
If Anthropic fails or hits a rate limit, the orchestrator automatically fails over to OpenAI.
### Custom Memory
```toml
[memory]
kind = "sqlite"
path = "~/.zeroclaw/workspace/memory/conversations.db"
retention_days = 90 # Automatic cleanup after 90 days
```
Or use Markdown for human-readable storage:
```toml
[memory]
kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```
See the [configuration reference](docs/config-reference.md#memory) for all memory options.
## Provider Support
| Provider | Status | API Key | Example Models |
| ----------------- | ---------- | ------------------- | ---------------------------------------------------- |
| **Anthropic** | ✅ Stable | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
| **OpenAI** | ✅ Stable | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
| **Google Gemini** | ✅ Stable | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
| **Ollama** | ✅ Stable | N/A (local) | `llama3.3`, `qwen2.5`, `phi4` |
| **Cerebras** | ✅ Stable | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
| **Groq** | ✅ Stable | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
| **Mistral** | 🚧 Planned | `MISTRAL_API_KEY` | TBD |
| **Cohere** | 🚧 Planned | `COHERE_API_KEY` | TBD |
### Custom Endpoints
ZeroClaw supports OpenAI-compatible endpoints:
```toml
[providers.custom]
enabled = true
api_key = "..."
base_url = "https://api.your-llm-provider.com/v1"
model = "your-model-name"
```
Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to access any LLM through the OpenAI interface.
See the [provider reference](docs/providers-reference.md) for complete configuration details.
## Channel Support
| Channel | Status | Authentication | Notes |
| ------------ | ---------- | ------------------------ | ----------------------------------------------------- |
| **Telegram** | ✅ Stable | Bot token | Full support including files, images, inline buttons |
| **Matrix** | ✅ Stable | Password or token | E2EE support with device verification |
| **Slack** | 🚧 Planned | OAuth or bot token | Requires workspace access |
| **Discord** | 🚧 Planned | Bot token | Requires guild permissions |
| **WhatsApp** | 🚧 Planned | Twilio or official API | Requires a business account |
| **CLI** | ✅ Stable | None | Direct conversational interface |
| **Web** | 🚧 Planned | API key or OAuth | Browser-based chat interface |
See the [channel reference](docs/channels-reference.md) for complete configuration instructions.
## Tool Support
ZeroClaw provides built-in tools for code execution, filesystem access, and web retrieval:
| Tool | Description | Required Runtime |
| -------------------- | ------------------------ | ------------------------------- |
| **bash** | Executes shell commands | Native or Docker |
| **python** | Executes Python scripts | Python 3.8+ (native) or Docker |
| **javascript** | Executes Node.js code | Node.js 18+ (native) or Docker |
| **filesystem_read** | Reads files | Native or Docker |
| **filesystem_write** | Writes files | Native or Docker |
| **web_fetch** | Fetches web content | Native or Docker |
### Execution Security
- **Native runtime** — runs as the daemon's user process, full filesystem access
- **Docker runtime** — full container isolation, separate filesystems and networks
Configure the execution policy in `config.toml`:
```toml
[runtime]
kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
```
See the [configuration reference](docs/config-reference.md#runtime) for complete security options.
## Deployment
### Local Deployment (Development)
```bash
zeroclaw daemon start
zeroclaw agent start
```
### Server Deployment (Production)
Use systemd to manage the daemon and agent as services:
```bash
# Install the binary
cargo install --path . --locked
# Configure the workspace
zeroclaw init
# Create the systemd service files
sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
# Enable and start the services
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
sudo systemctl start zeroclaw-daemon zeroclaw-agent
# Verify status
sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```
See the [network deployment guide](docs/network-deployment.md) for complete production deployment instructions.
### Docker
```bash
# Build the image
docker build -t zeroclaw:latest .
# Run the container
docker run -d \
  --name zeroclaw \
  -v ~/.zeroclaw/workspace:/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  zeroclaw:latest
```
See the [`Dockerfile`](Dockerfile) for build details and configuration options.
### Edge Hardware
ZeroClaw is designed to run on low-power hardware:
- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, ultra-low cost
- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support
See the [Hardware Guide](docs/hardware/README.md) for device-specific setup instructions.
## Tunneling (Public Exposure)
Expose your local ZeroClaw daemon to the public network through secure tunnels:
```bash
zeroclaw tunnel start --provider cloudflare
```
Supported tunnel providers:
- **Cloudflare Tunnel** — free HTTPS, no port exposure, multi-domain support
- **Ngrok** — quick setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port
See the [configuration reference](docs/config-reference.md#tunnel) for complete configuration options.
## Security
ZeroClaw implements multiple layers of security:
### Pairing
The daemon generates a pairing secret on first start, stored in `~/.zeroclaw/workspace/.pairing`. Clients (the agent, the CLI) must present this secret to connect.
```bash
zeroclaw pairing rotate # Generate a new secret and invalidate the old one
```
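A minimal rotation sketch, assuming connected clients re-read `~/.zeroclaw/workspace/.pairing` when restarted:
```bash
# Sketch: full pairing lifecycle using the commands documented above.
zeroclaw pairing init      # first-time setup: generate a pairing secret
zeroclaw pairing rotate    # later: issue a new secret, invalidating the old
zeroclaw agent restart     # assumed: restart clients so they present the new secret
```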
### Sandboxing
- **Docker runtime** — full container isolation with separate filesystems and networks
- **Native runtime** — runs as a user process, scoped to the workspace by default
### Allowlists
Channels can restrict access by user ID:
```toml
[channels.telegram]
enabled = true
allowed_users = [123456789, 987654321] # Explicit allowlist
```
### Encryption
- **Matrix E2EE** — full end-to-end encryption with device verification
- **TLS transport** — all API and tunnel traffic uses HTTPS/TLS
See the [security documentation](docs/security/README.md) for complete policies and practices.
## Observability
ZeroClaw logs to `~/.zeroclaw/workspace/logs/` by default. Logs are stored per component:
```
~/.zeroclaw/workspace/logs/
├── daemon.log     # Daemon logs (startup, API requests, errors)
├── agent.log      # Agent logs (message routing, tool execution)
├── telegram.log   # Channel-specific logs (if enabled)
└── matrix.log     # Channel-specific logs (if enabled)
```
### Logging Configuration
```toml
[logging]
level = "info" # debug, info, warn, error
path = "~/.zeroclaw/workspace/logs/"
rotation = "daily" # daily, hourly, size
max_size_mb = 100 # For size-based rotation
retention_days = 30 # Automatic cleanup after N days
```
See the [configuration reference](docs/config-reference.md#logging) for all logging options.
### Metrics (Planned)
Prometheus metrics support for production monitoring is coming soon. Tracked in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
## Skills
ZeroClaw supports custom skills — reusable modules that extend the system's capabilities.
### Skill Definition
Skills live in `~/.zeroclaw/workspace/skills/<skill-name>/` with this structure:
```
skills/
└── my-skill/
    ├── skill.toml   # Skill metadata (name, description, dependencies)
    ├── prompt.md    # System prompt for the AI
    └── tools/       # Optional custom tools
        └── my_tool.py
```
### Skill Example
```toml
# skills/web-research/skill.toml
[skill]
name = "web-research"
description = "Searches the web and summarizes results"
version = "1.0.0"
[dependencies]
tools = ["web_fetch", "bash"]
```
```markdown
<!-- skills/web-research/prompt.md -->
You are a research assistant. When asked to research something:
1. Use web_fetch to retrieve the content
2. Summarize the results in an easy-to-read format
3. Cite sources with URLs
```
### Skill Usage
Skills are loaded automatically at agent startup. Reference them by name in conversations:
```
User: Use the web-research skill to find the latest AI news
Bot: [loads the web-research skill, runs web_fetch, summarizes the results]
```
See the [Skills](#skills) section for complete skill-creation instructions.
## Open Skills
ZeroClaw supports [Open Skills](https://github.com/openagents-com/open-skills) — a modular, provider-agnostic system for extending AI agent capabilities.
### Enable Open Skills
```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # optional
```
You can also override at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`; a sketch follows below.
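A sketch of a one-off override (the variable names come from the docs above; precedence over `config.toml` is an assumption):
```bash
# Sketch: enable Open Skills for this invocation only, pointing at a
# custom directory. Inline environment overrides are an assumption
# about standard precedence over config.toml.
ZEROCLAW_OPEN_SKILLS_ENABLED=true \
ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills" \
zeroclaw agent start
```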
## Development
```bash
cargo build                          # Dev build
cargo build --release                # Release build (codegen-units=1, works on all devices including Raspberry Pi)
cargo build --profile release-fast   # Faster build (codegen-units=8, requires 16 GB+ RAM)
cargo test                           # Run the full test suite
cargo clippy --locked --all-targets -- -D clippy::correctness
cargo fmt                            # Formatting
# Run the SQLite vs Markdown comparison benchmark
cargo test --test memory_comparison -- --nocapture
```
### Pre-push hook
A Git hook runs `cargo fmt --check`, `cargo clippy -- -D warnings`, and `cargo test` before every push. Enable it once:
```bash
git config core.hooksPath .githooks
```
### Build Troubleshooting (OpenSSL errors on Linux)
If you hit an `openssl-sys` build error, sync dependencies and recompile with the repository's lockfile:
```bash
git pull
cargo build --release --locked
cargo install --path . --force --locked
```
ZeroClaw is configured to use `rustls` for HTTP/TLS dependencies; `--locked` keeps the transitive dependency graph deterministic in clean environments.
To skip the hook when you need a quick push during development:
```bash
git push --no-verify
```
## Collaboration & Docs
Start with the documentation hub for a task-based map:
- Documentation hub: [`docs/README.md`](docs/README.md)
- Unified docs TOC: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Command reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Configuration reference: [`docs/config-reference.md`](docs/config-reference.md)
- Provider reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channel reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Docs inventory/classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/issue triage snapshot (as of February 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
Core collaboration references:
- Documentation hub: [docs/README.md](docs/README.md)
- Documentation template: [docs/doc-template.md](docs/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel configuration reference: [docs/channels-reference.md](docs/channels-reference.md)
- Matrix encrypted-room operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Contributing guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- Ownership and CI triage map: [docs/ci-map.md](docs/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)
For deployment and runtime operations:
- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy Agent Playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
## Support ZeroClaw
If ZeroClaw helps your work and you would like to support its continued development, you can donate here:
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee" alt="Kauf mir einen Kaffee" /></a>
### 🙏 Besonderer Dank
Ein herzliches Dankeschön an die Gemeinschaften und Institutionen, die diese Open-Source-Arbeit inspirieren und unterstützen:
- **Harvard University** — für die Förderung intellektueller Neugier und das Erweitern der Grenzen des Möglichen.
- **MIT** — für das Eintreten für offenes Wissen, Open Source und die Überzeugung, dass Technologie für alle zugänglich sein sollte.
- **Sundai Club** — für die Gemeinschaft, die Energie und den unermüdlichen Willen, Dinge zu bauen, die zählen.
- **Die Welt und Darüber Hinaus** 🌍✨ — an jeden Mitwirkenden, Träumer und Erbauer da draußen, der Open Source zu einer Kraft für das Gute macht. Das ist für dich.
Wir bauen in Open Source, weil die besten Ideen von überall kommen. Wenn du das liest, bist du Teil davon. Willkommen. 🦀❤️
## ⚠️ Offizielles Repository und Fälschungswarnung
**Dies ist das einzige offizielle ZeroClaw-Repository:**
> <https://github.com/zeroclaw-labs/zeroclaw>
Jedes andere Repository, Organisation, Domain oder Paket, das behauptet "ZeroClaw" zu sein oder eine Verbindung zu ZeroClaw Labs zu implizieren, ist **nicht autorisiert und nicht mit diesem Projekt verbunden**. Bekannte nicht autorisierte Forks werden in [TRADEMARK.md](TRADEMARK.md) aufgeführt.
Wenn du auf Fälschung oder Markenmissbrauch stößt, bitte [öffne ein Issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
---
## License
ZeroClaw is dual-licensed for maximum openness and contributor protection:
| License | Use Cases |
| ---------------------------- | -------------------------------------------------------- |
| [MIT](LICENSE-MIT) | Open source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |
You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
### Trademark
The **ZeroClaw** name and logo are registered trademarks of ZeroClaw Labs. This license grants no permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
### Contributor Protections
- You **retain copyright** to your contributions
- The **patent grant** (Apache 2.0) protects you from patent claims by other contributors
- Your contributions are **permanently attributed** in the commit history and [NOTICE](NOTICE)
- No trademark rights are transferred by contributing
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
- New `Observer` → `src/observability/`
- New `Tool` → `src/tools/`
- New `Memory` → `src/memory/`
- New `Tunnel` → `src/tunnel/`
- New `Skill` → `~/.zeroclaw/workspace/skills/<n>/`
---
**ZeroClaw** — Zero overhead. Zero compromise. Deploy anywhere. Swap anything. 🦀
## Star History
<p align="center">
<a href="https://www.star-history.com/#zeroclaw-labs/zeroclaw&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
<img alt="Stern-Historie-Diagramm" src="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
</picture>
</a>
</p>
+178
View File
@@ -0,0 +1,178 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center">
<strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>
<p align="center">
🌐 <strong>Languages:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
---
> **📝 Note:** This is a condensed README in Greek. For the full documentation, refer to the [English README](README.md). Documentation links point to the English docs.
## What is ZeroClaw?
ZeroClaw is a lightweight, customizable, and extensible AI assistant infrastructure built in Rust. It connects various LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).
### Key Features
- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Supports OpenAI, Anthropic, Google Gemini, Ollama, and more
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Customizable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Easily add custom tools
- **🔒 Security first**: Reverse proxy, privacy-first design
---
## Quick Start
### Requirements
- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)
### Installation
```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# Build
cargo build --release
# Run
cargo run --release
```
### With Docker
```bash
docker run -d \
--name zeroclaw \
-e ANTHROPIC_API_KEY=your_key \
-v zeroclaw-data:/app/data \
zeroclaw/zeroclaw:latest
```
---
## Configuration
ZeroClaw uses a YAML configuration file. By default it looks for `config.yaml`.
```yaml
# Default provider
provider: anthropic
# Provider configuration
providers:
anthropic:
api_key: ${ANTHROPIC_API_KEY}
model: claude-3-5-sonnet-20241022
openai:
api_key: ${OPENAI_API_KEY}
model: gpt-4o
# Memory configuration
memory:
backend: sqlite
path: data/memory.db
# Channel configuration
channels:
telegram:
token: ${TELEGRAM_BOT_TOKEN}
```
---
## Documentation
For detailed documentation, see:
- [Documentation Hub](docs/README.md)
- [Command Reference](docs/commands-reference.md)
- [Provider Reference](docs/providers-reference.md)
- [Channel Reference](docs/channels-reference.md)
- [Configuration Reference](docs/config-reference.md)
---
## Contributing
Contributions are welcome! Please read the [Contributing Guide](CONTRIBUTING.md).
---
## License
This project is dual-licensed:
- MIT License
- Apache License, version 2.0
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.
---
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
## Sponsors
If ZeroClaw is useful to you, please consider buying us a coffee:
[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)
+914
View File
@@ -0,0 +1,914 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center">
<strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
Built by students and members of the Harvard, MIT, and Sundai.Club communities.
</p>
<p align="center">
🌐 <strong>Languages:</strong> <a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
<p align="center">
<a href="#inicio-rápido">Inicio Rápido</a> |
<a href="bootstrap.sh">Configuración con Un Clic</a> |
<a href="docs/README.md">Hub de Documentación</a> |
<a href="docs/SUMMARY.md">Tabla de Contenidos de Documentación</a>
</p>
<p align="center">
<strong>Quick links:</strong>
<a href="docs/reference/README.md">Reference</a> ·
<a href="docs/operations/README.md">Operations</a> ·
<a href="docs/troubleshooting.md">Troubleshooting</a> ·
<a href="docs/security/README.md">Security</a> ·
<a href="docs/hardware/README.md">Hardware</a> ·
<a href="docs/contributing/README.md">Contributing</a>
</p>
<p align="center">
<strong>Fast, lightweight, fully autonomous AI assistant infrastructure</strong><br />
Deploy anywhere. Swap anything.
</p>
<p align="center">
ZeroClaw is the <strong>runtime operating system</strong> for agent workflows — infrastructure that abstracts models, tools, memory, and execution so you can build agents once and run them anywhere.
</p>
<p align="center"><code>Arquitectura basada en traits · runtime seguro por defecto · proveedor/canal/herramienta intercambiables · todo es conectable</code></p>
### 📢 Announcements
Use this table for important notices (compatibility changes, security advisories, maintenance windows, and version locks).
| Date (UTC) | Level | Notice | Action |
| ---------- | ----------- | ------ | ------ |
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and this domain/repository is impersonating our official website/project. | Do not trust information, binaries, fundraising, or announcements from these sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience during the wait. We are still seeing impersonation attempts: do not take part in any investment/fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its authentication and credential-use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the Consumer Terms of Use. | Please temporarily avoid Claude Code OAuth integrations to prevent any potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
- 🏎️ **Lightweight Runtime by Default:** Common CLI workflows and status commands run within a few megabytes of memory in production builds.
- 💰 **Budget-Friendly Deployment:** Designed for low-cost boards and small cloud instances without heavy runtime dependencies.
- ⚡ **Fast Cold Starts:** The single-binary Rust runtime keeps command and daemon startup near-instant for day-to-day operations.
- 🌍 **Portable Architecture:** One single-binary workflow across ARM, x86, and RISC-V with swappable provider/channel/tool.
### Why teams choose ZeroClaw
- **Lightweight by default:** small Rust binary, fast startup, low memory footprint.
- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scoping.
- **Fully swappable:** core systems are traits (providers, channels, tools, memory, tunnels).
- **No vendor lock-in:** OpenAI-compatible provider support + pluggable custom endpoints.
## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproducible)
Quick local-machine benchmark (macOS arm64, Feb 2026) normalized for 0.8 GHz edge hardware.
| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
| ---------------------------- | ------------- | -------------- | --------------- | ------------------------ |
| **Language** | TypeScript | Python | Go | **Rust** |
| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
| **Startup (0.8 GHz core)** | > 500s | > 30s | < 1s | **< 10ms** |
| **Binary Size** | ~28 MB (dist) | N/A (scripts) | ~8 MB | **3.4 MB** |
| **Cost** | $599 Mac Mini | ~$50 Linux SBC | $10 Linux board | **Any $10 hardware** |
> Notes: ZeroClaw results are measured on production builds using `/usr/bin/time -l`. OpenClaw requires the Node.js runtime (typically ~390 MB of additional memory overhead), while NanoBot requires the Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.
<p align="center">
<img src="zero-claw.jpeg" alt="Comparación ZeroClaw vs OpenClaw" width="800" />
</p>
### Reproducible Local Measurement
Benchmark claims can drift as code and toolchains evolve, so always measure your current build locally:
```bash
cargo build --release
ls -lh target/release/zeroclaw
/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status
```
Sample output (macOS arm64, measured February 18, 2026):
- Release binary size: `8.8M`
- `zeroclaw --help`: real time approx. `0.02s`, peak memory footprint ~`3.9 MB`
- `zeroclaw status`: real time approx. `0.01s`, peak memory footprint ~`4.1 MB`
## Prerequisites
<details>
<summary><strong>Windows</strong></summary>
### Windows — Required
1. **Visual Studio Build Tools** (provides the MSVC linker and the Windows SDK):
```powershell
winget install Microsoft.VisualStudio.2022.BuildTools
```
During installation (or via the Visual Studio Installer), select the **"Desktop development with C++"** workload.
2. **Rust toolchain:**
```powershell
winget install Rustlang.Rustup
```
After installation, open a new terminal and run `rustup default stable` to make sure the stable toolchain is active.
3. **Verify** that both work:
```powershell
rustc --version
cargo --version
```
### Windows — Optional
- **Docker Desktop** — required only if you use the [Docker sandboxed runtime](#soporte-de-runtime-actual) (`runtime.kind = "docker"`). Install via `winget install Docker.DockerDesktop`.
</details>
<details>
<summary><strong>Linux / macOS</strong></summary>
### Linux / macOS — Required
1. **Essential build tools:**
- **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
- **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
- **macOS:** Install the Xcode Command Line Tools: `xcode-select --install`
2. **Rust toolchain:**
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
See [rustup.rs](https://rustup.rs) for details.
3. **Verify:**
```bash
rustc --version
cargo --version
```
### Linux / macOS — Optional
- **Docker** — required only if you use the [Docker sandboxed runtime](#soporte-de-runtime-actual) (`runtime.kind = "docker"`).
- **Linux (Debian/Ubuntu):** see [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
- **Linux (Fedora/RHEL):** see [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
- **macOS:** install Docker Desktop via [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)
</details>
## Quick Start
### Option 1: Automated setup (recommended)
The `bootstrap.sh` script installs Rust, clones ZeroClaw, builds it, and sets up your initial development environment:
```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/bootstrap.sh | bash
```
This will:
1. Install Rust (if not already present)
2. Clone the ZeroClaw repository
3. Build ZeroClaw in release mode
4. Install `zeroclaw` into `~/.cargo/bin/`
5. Create the default workspace structure at `~/.zeroclaw/workspace/`
6. Generate an initial `~/.zeroclaw/workspace/config.toml` configuration file
After bootstrapping, reload your shell or run `source ~/.cargo/env` to use the `zeroclaw` command globally.
### Option 2: Manual installation
<details>
<summary><strong>Click to see the manual installation steps</strong></summary>
```bash
# 1. Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# 2. Build in release mode
cargo build --release --locked
# 3. Install the binary
cargo install --path . --locked
# 4. Initialize the workspace
zeroclaw init
# 5. Verify the installation
zeroclaw --version
zeroclaw status
```
</details>
### After installation
Once installed (via bootstrap or manually), you should see:
```
~/.zeroclaw/workspace/
├── config.toml   # Main configuration
├── .pairing      # Pairing secrets (generated at first startup)
├── logs/         # Daemon/agent logs
├── skills/       # Custom skills
└── memory/       # Conversational context storage
```
**Next steps:**
1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
2. Review the [configuration reference](docs/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
4. Test via your preferred channel (see the [channel reference](docs/channels-reference.md))
## Configuration
Edit `~/.zeroclaw/workspace/config.toml` to configure providers, channels, and system behavior.
### Quick Configuration Reference
```toml
[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
[providers.openai]
api_key = "sk-..."
model = "gpt-4o"
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@bot:matrix.org"
password = "..."
[memory]
kind = "markdown" # o "sqlite" o "none"
[runtime]
kind = "native" # o "docker" (requiere Docker)
```
**Full reference docs:**
- [Configuration Reference](docs/config-reference.md) — all settings, validations, defaults
- [Provider Reference](docs/providers-reference.md) — AI-provider-specific configurations
- [Channel Reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling
### Runtime Support (current)
ZeroClaw supports two code-execution backends:
- **`native`** (default) — direct process execution, fastest path, ideal for trusted environments
- **`docker`** — full container isolation, hardened security policies, requires Docker
Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/config-reference.md#runtime) for full details.
## Commands
```bash
# Workspace management
zeroclaw init             # Initialize a new workspace
zeroclaw status           # Show daemon/agent status
zeroclaw config validate  # Check config.toml syntax and values
# Daemon management
zeroclaw daemon start     # Start the daemon in the background
zeroclaw daemon stop      # Stop the running daemon
zeroclaw daemon restart   # Restart the daemon (config reload)
zeroclaw daemon logs      # Show daemon logs
# Agent management
zeroclaw agent start      # Start the agent (requires a running daemon)
zeroclaw agent stop       # Stop the agent
zeroclaw agent restart    # Restart the agent (config reload)
# Pairing operations
zeroclaw pairing init     # Generate a new pairing secret
zeroclaw pairing rotate   # Rotate the existing pairing secret
# Tunneling (for public exposure)
zeroclaw tunnel start     # Start a tunnel to the local daemon
zeroclaw tunnel stop      # Stop the active tunnel
# Diagnostics
zeroclaw doctor           # Run system health checks
zeroclaw version          # Show version and build info
```
See the [Command Reference](docs/commands-reference.md) for full options and examples.
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                        Channels (trait)                         │
│   Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom      │
└─────────────────────────┬───────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│                       Agent Orchestrator                        │
│   ┌──────────────┐   ┌──────────────┐   ┌──────────────┐        │
│   │   Message    │   │    Memory    │   │     Tool     │        │
│   │   Routing    │   │   Context    │   │  Execution   │        │
│   └──────────────┘   └──────────────┘   └──────────────┘        │
└─────────────────────────┬───────────────────────────────────────┘
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
 ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
 │  Providers   │ │    Memory    │ │    Tools     │
 │   (trait)    │ │   (trait)    │ │   (trait)    │
 ├──────────────┤ ├──────────────┤ ├──────────────┤
 │  Anthropic   │ │  Markdown    │ │  Filesystem  │
 │  OpenAI      │ │  SQLite      │ │  Bash        │
 │  Gemini      │ │  None        │ │  Web Fetch   │
 │  Ollama      │ │  Custom      │ │  Custom      │
 │  Custom      │ └──────────────┘ └──────────────┘
 └──────────────┘
┌─────────────────────────────────────────────────────────────────┐
│                        Runtime (trait)                          │
│                      Native  │  Docker                          │
└─────────────────────────────────────────────────────────────────┘
```
**Key principles:**
- Everything is a **trait** — providers, channels, tools, memory, tunnels
- Channels call the orchestrator; the orchestrator calls providers + tools
- The memory system manages conversational context (markdown, SQLite, or none)
- The runtime abstracts code execution (native or Docker)
- No vendor lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes
See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
## Examples
### Telegram Bot
```toml
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
allowed_users = [987654321] # Your Telegram user ID
```
Start the daemon + agent, then message your bot on Telegram:
```
/start
Hi! Could you help me write a Python script?
```
The bot replies with AI-generated code, runs tools on request, and keeps the conversation context.
### Matrix (end-to-end encrypted)
```toml
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@zeroclaw:matrix.org"
password = "..."
device_name = "zeroclaw-prod"
e2ee_enabled = true
```
Invite `@zeroclaw:matrix.org` into an encrypted room and the bot will reply fully encrypted. See the [Matrix E2EE Guide](docs/matrix-e2ee-guide.md) for device verification setup.
### Multi-Provider
```toml
[providers.anthropic]
enabled = true
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"
[providers.openai]
enabled = true
api_key = "sk-..."
model = "gpt-4o"
[orchestrator]
default_provider = "anthropic"
fallback_providers = ["openai"] # Fail over on provider error
```
If Anthropic fails or gets rate-limited, the orchestrator automatically fails over to OpenAI.
### Custom Memory
```toml
[memory]
kind = "sqlite"
path = "~/.zeroclaw/workspace/memory/conversations.db"
retention_days = 90 # Automatic purge after 90 days
```
Or use Markdown for human-readable storage:
```toml
[memory]
kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```
See the [Configuration Reference](docs/config-reference.md#memory) for all memory options.
## Provider Support
| Provider | Status | API Key | Example Models |
| ----------------- | ---------- | ------------------- | ---------------------------------------------------- |
| **Anthropic** | ✅ Stable | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
| **OpenAI** | ✅ Stable | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
| **Google Gemini** | ✅ Stable | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
| **Ollama** | ✅ Stable | N/A (local) | `llama3.3`, `qwen2.5`, `phi4` |
| **Cerebras** | ✅ Stable | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
| **Groq** | ✅ Stable | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
| **Mistral** | 🚧 Planned | `MISTRAL_API_KEY` | TBD |
| **Cohere** | 🚧 Planned | `COHERE_API_KEY` | TBD |
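As a minimal sketch, the key names in the table map directly to shell exports; the values below are placeholders, and whether a given build reads these variables in preference to `config.toml` entries depends on your provider configuration:
```bash
# Export only the provider keys you actually use (names from the table above)
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."

# Ollama needs no key; it just has to be running locally
zeroclaw agent start
```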
### Custom Endpoints
ZeroClaw supports OpenAI-compatible endpoints:
```toml
[providers.custom]
enabled = true
api_key = "..."
base_url = "https://api.your-llm-provider.com/v1"
model = "your-model-name"
```
Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to reach any LLM through the OpenAI interface.
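For instance, a hypothetical local LiteLLM proxy paired with the `[providers.custom]` block above (the invocation and port are assumptions; check the LiteLLM docs for your version):
```bash
# Install and start a LiteLLM proxy that exposes an OpenAI-compatible API
pip install litellm
litellm --model gpt-4o --port 4000
```
Then point `base_url` at `http://localhost:4000/v1` and set `model` to whatever name the proxy serves.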
See the [Provider Reference](docs/providers-reference.md) for full configuration details.
## Channel Support
| Channel | Status | Authentication | Notes |
| ------------ | ---------- | ------------------------ | --------------------------------------------------------- |
| **Telegram** | ✅ Stable | Bot token | Full support, including files, images, and inline buttons |
| **Matrix** | ✅ Stable | Password or token | E2EE support with device verification |
| **Slack** | 🚧 Planned | OAuth or bot token | Requires workspace access |
| **Discord** | 🚧 Planned | Bot token | Requires guild permissions |
| **WhatsApp** | 🚧 Planned | Twilio or official API | Requires a business account |
| **CLI** | ✅ Stable | None | Direct conversational interface |
| **Web** | 🚧 Planned | API key or OAuth | Browser-based chat interface |
See the [Channel Reference](docs/channels-reference.md) for full setup instructions.
## Tool Support
ZeroClaw ships built-in tools for code execution, filesystem access, and web retrieval:
| Tool | Description | Required Runtime |
| -------------------- | --------------------------- | ----------------------------- |
| **bash** | Runs shell commands | Native or Docker |
| **python** | Runs Python scripts | Python 3.8+ (native) or Docker |
| **javascript** | Runs Node.js code | Node.js 18+ (native) or Docker |
| **filesystem_read** | Reads files | Native or Docker |
| **filesystem_write** | Writes files | Native or Docker |
| **web_fetch** | Fetches web content | Native or Docker |
### Execution Security
- **Native runtime** — runs as the daemon's user process, full filesystem access
- **Docker runtime** — full container isolation, separate filesystems and networks
Configure the execution policy in `config.toml`:
```toml
[runtime]
kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
```
See the [Configuration Reference](docs/config-reference.md#runtime) for full security options.
## Deployment
### Local Deployment (Development)
```bash
zeroclaw daemon start
zeroclaw agent start
```
### Server Deployment (Production)
Use systemd to manage the daemon and agent as services:
```bash
# Install the binary
cargo install --path . --locked
# Set up the workspace
zeroclaw init
# Create the systemd service files
sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
# Enable and start the services
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
sudo systemctl start zeroclaw-daemon zeroclaw-agent
# Check status
sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```
See the [Network Deployment Guide](docs/network-deployment.md) for full production deployment instructions.
### Docker
```bash
# Build the image
docker build -t zeroclaw:latest .
# Run the container
docker run -d \
  --name zeroclaw \
  -v ~/.zeroclaw/workspace:/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  zeroclaw:latest
```
See the [`Dockerfile`](Dockerfile) for build details and configuration options.
### Edge Hardware
ZeroClaw is designed to run on low-power hardware:
- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, ultra-low cost
- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support
See the [Hardware Guide](docs/hardware/README.md) for device-specific setup instructions.
## Tunneling (Public Exposure)
Expose your local ZeroClaw daemon to the public network via secure tunnels:
```bash
zeroclaw tunnel start --provider cloudflare
```
Supported tunnel providers:
- **Cloudflare Tunnel** — free HTTPS, no port exposure, multi-domain support
- **Ngrok** — fast setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port
See the [Configuration Reference](docs/config-reference.md#tunnel) for full configuration options.
## Security
ZeroClaw implements multiple security layers:
### Pairing
At first startup, the daemon generates a pairing secret stored in `~/.zeroclaw/workspace/.pairing`. Clients (agent, CLI) must present this secret to connect.
```bash
zeroclaw pairing rotate # Generate a new secret and invalidate the old one
```
### Sandboxing
- **Docker runtime** — full container isolation with separate filesystems and networks
- **Native runtime** — runs as a user process, workspace-scoped by default
### Allowlists
Channels can restrict access by user ID:
```toml
[channels.telegram]
enabled = true
allowed_users = [123456789, 987654321] # Explicit allowlist
```
### Encryption
- **Matrix E2EE** — full end-to-end encryption with device verification
- **TLS transport** — all API and tunnel traffic uses HTTPS/TLS
See the [Security Documentation](docs/security/README.md) for full policies and practices.
## Observability
ZeroClaw writes logs to `~/.zeroclaw/workspace/logs/` by default. Logs are stored per component:
```
~/.zeroclaw/workspace/logs/
├── daemon.log     # Daemon logs (startup, API requests, errors)
├── agent.log      # Agent logs (message routing, tool execution)
├── telegram.log   # Channel-specific logs (if enabled)
└── matrix.log     # Channel-specific logs (if enabled)
```
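A minimal way to watch these while reproducing an issue (paths taken from the layout above):
```bash
# Stream daemon and agent logs side by side during a debugging session
tail -f ~/.zeroclaw/workspace/logs/daemon.log \
        ~/.zeroclaw/workspace/logs/agent.log
```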
### Logging Configuration
```toml
[logging]
level = "info" # debug, info, warn, error
path = "~/.zeroclaw/workspace/logs/"
rotation = "daily" # daily, hourly, size
max_size_mb = 100 # For size-based rotation
retention_days = 30 # Automatic purge after N days
```
See the [Configuration Reference](docs/config-reference.md#logging) for all logging options.
### Metrics (Planned)
Prometheus metrics support for production monitoring is coming soon. Track it in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
## Skills
ZeroClaw supports custom skills — reusable modules that extend the system's capabilities.
### Skill Definition
Skills live in `~/.zeroclaw/workspace/skills/<skill-name>/` with this structure:
```
skills/
└── my-skill/
    ├── skill.toml   # Skill metadata (name, description, dependencies)
    ├── prompt.md    # System prompt for the AI
    └── tools/       # Optional custom tools
        └── my_tool.py
```
### Skill Example
```toml
# skills/web-research/skill.toml
[skill]
name = "web-research"
description = "Searches the web and summarizes results"
version = "1.0.0"
[dependencies]
tools = ["web_fetch", "bash"]
```
```markdown
<!-- skills/web-research/prompt.md -->
You are a research assistant. When asked to look something up:
1. Use web_fetch to retrieve the content
2. Summarize the results in an easy-to-read format
3. Cite sources with URLs
```
### Using Skills
Skills are loaded automatically at agent startup. Reference them by name in conversation:
```
User: Use the web-research skill to find the latest AI news
Bot: [loads the web-research skill, runs web_fetch, summarizes results]
```
See the [Skills](#habilidades-skills) section for full skill-creation instructions.
## Open Skills
ZeroClaw supports [Open Skills](https://github.com/openagents-com/open-skills) — a modular, provider-agnostic system for extending AI agent capabilities.
### Enabling Open Skills
```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # optional
```
You can also override these at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`, as shown below.
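For example, a one-off override without touching `config.toml` (the skills path here is a placeholder; the variable names come from the sentence above):
```bash
# Environment variables take effect for this invocation only
ZEROCLAW_OPEN_SKILLS_ENABLED=true \
ZEROCLAW_OPEN_SKILLS_DIR="$HOME/src/open-skills" \
zeroclaw agent start
```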
## Development
```bash
cargo build                        # Development build
cargo build --release              # Release build (codegen-units=1, works on all devices including Raspberry Pi)
cargo build --profile release-fast # Faster build (codegen-units=8, requires 16 GB+ RAM)
cargo test                         # Runs the full test suite
cargo clippy --locked --all-targets -- -D clippy::correctness
cargo fmt                          # Formatting
# Run the SQLite vs Markdown comparison benchmark
cargo test --test memory_comparison -- --nocapture
```
### Pre-push hook
A git hook runs `cargo fmt --check`, `cargo clippy -- -D warnings`, and `cargo test` before every push. Enable it once:
```bash
git config core.hooksPath .githooks
```
### Build Troubleshooting (OpenSSL errors on Linux)
If you hit an `openssl-sys` build error, sync dependencies and rebuild against the repository's lockfile:
```bash
git pull
cargo build --release --locked
cargo install --path . --force --locked
```
ZeroClaw is configured to use `rustls` for HTTP/TLS dependencies; `--locked` keeps the transitive dependency graph deterministic in clean environments.
To skip the hook when you need a quick push during development:
```bash
git push --no-verify
```
## Collaboration & Docs
Start with the documentation hub for a task-based map:
- Documentation Hub: [`docs/README.md`](docs/README.md)
- Unified Docs Table of Contents: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Command Reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Configuration Reference: [`docs/config-reference.md`](docs/config-reference.md)
- Provider Reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channel Reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations Runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Docs Inventory/Classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/Issue Triage Snapshot (as of Feb 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
Main collaboration references:
- Documentation Hub: [docs/README.md](docs/README.md)
- Documentation Template: [docs/doc-template.md](docs/doc-template.md)
- Documentation Change Checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel Configuration Reference: [docs/channels-reference.md](docs/channels-reference.md)
- Matrix Encrypted-Room Operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Contributing Guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR Workflow Policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer Playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- Ownership and CI Triage Map: [docs/ci-map.md](docs/ci-map.md)
- Security Disclosure Policy: [SECURITY.md](SECURITY.md)
For deployment and runtime operations:
- Network Deployment Guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy Agent Playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
## Supporting ZeroClaw
If ZeroClaw helps your work and you would like to support its continued development, you can donate here:
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
### 🙏 Special Thanks
A heartfelt thank-you to the communities and institutions that inspire and nurture this open-source work:
- **Harvard University** — for fostering intellectual curiosity and pushing the boundaries of what is possible.
- **MIT** — for championing open knowledge, open source, and the conviction that technology should be accessible to everyone.
- **Sundai Club** — for the community, the energy, and the relentless drive to build things that matter.
- **The World and Beyond** 🌍✨ — to every contributor, dreamer, and builder out there who makes open source a force for good. This one is for you.
We build in the open because the best ideas come from everywhere. If you are reading this, you are part of it. Welcome. 🦀❤️
## ⚠️ Official Repository and Impersonation Warning
**This is the only official ZeroClaw repository:**
> <https://github.com/zeroclaw-labs/zeroclaw>
Any other repository, organization, domain, or package that claims to be "ZeroClaw" or implies affiliation with ZeroClaw Labs is **unauthorized and unaffiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).
If you come across impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
---
## License
ZeroClaw is dual-licensed for maximum openness and contributor protection:
| License | Use Cases |
| ---------------------------- | -------------------------------------------------------- |
| [MIT](LICENSE-MIT) | Open source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |
You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
### Trademark
The **ZeroClaw** name and logo are registered trademarks of ZeroClaw Labs. This license grants no permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
### Contributor Protections
- You **retain copyright** to your contributions
- The **patent grant** (Apache 2.0) protects you from patent claims by other contributors
- Your contributions are **permanently attributed** in the commit history and [NOTICE](NOTICE)
- No trademark rights are transferred by contributing
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
- New `Observer` → `src/observability/`
- New `Tool` → `src/tools/`
- New `Memory` → `src/memory/`
- New `Tunnel` → `src/tunnel/`
- New `Skill` → `~/.zeroclaw/workspace/skills/<n>/`
---
**ZeroClaw** — Zero overhead. Zero compromise. Deploy anywhere. Swap anything. 🦀
## Star History
<p align="center">
<a href="https://www.star-history.com/#zeroclaw-labs/zeroclaw&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
<img alt="Gráfico de Historial de Estrellas" src="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
</picture>
</a>
</p>
+179
View File
@@ -0,0 +1,179 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center">
<strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>
<p align="center">
🌐 <strong>Languages:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
---
## What is ZeroClaw?
ZeroClaw is a lightweight, customizable, and extensible AI assistant infrastructure built in Rust. It connects various LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).
### Key Features
- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Supports OpenAI, Anthropic, Google Gemini, Ollama, and more
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Easily add custom tools
- **🔒 Security first**: Reverse proxy, privacy-first design
---
## Quick Start
### Requirements
- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)
### Installation
```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw
# Build
cargo build --release
# Run
cargo run --release
```
### With Docker
```bash
docker run -d \
--name zeroclaw \
-e ANTHROPIC_API_KEY=your_key \
-v zeroclaw-data:/app/data \
zeroclaw/zeroclaw:latest
```
---
## Configuration
ZeroClaw uses a YAML configuration file. By default it looks for `config.yaml`.
```yaml
# Default provider
provider: anthropic
# Provider configuration
providers:
anthropic:
api_key: ${ANTHROPIC_API_KEY}
model: claude-3-5-sonnet-20241022
openai:
api_key: ${OPENAI_API_KEY}
model: gpt-4o
# Memory configuration
memory:
backend: sqlite
path: data/memory.db
# Channel configuration
channels:
telegram:
token: ${TELEGRAM_BOT_TOKEN}
```
---
## Documentation
For detailed documentation, see:
- [Documentation Hub](docs/README.md)
- [Command Reference](docs/commands-reference.md)
- [Provider Reference](docs/providers-reference.md)
- [Channel Reference](docs/channels-reference.md)
- [Configuration Reference](docs/config-reference.md)
---
## Contributing
Contributions are welcome! Please read the [Contributing Guide](CONTRIBUTING.md).
---
## License
This project is dual-licensed:
- MIT License
- Apache License, version 2.0
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.
---
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
## Sponsors
If ZeroClaw is useful to you, please consider buying us a coffee:
[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)
+78 -50
View File
@@ -1,5 +1,5 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
<img src="docs/assets/zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
@@ -14,9 +14,6 @@
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributeurs" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Offrez-moi un café" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X : @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu : Officiel" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram : @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit : r/zeroclawlabs" /></a>
</p>
@@ -25,12 +22,43 @@ Built by students and members of the Harvard, MIT, and Sundai communities.
</p>
<p align="center">
🌐 <strong>Languages:</strong> <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a> · <a href="README.fr.md">Français</a> · <a href="README.vi.md">Tiếng Việt</a>
🌐 <strong>Languages:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
<p align="center">
<a href="#démarrage-rapide">Démarrage</a> |
<a href="bootstrap.sh">Configuration en un clic</a> |
<a href="https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/install.sh">Configuration en un clic</a> |
<a href="docs/README.md">Hub Documentation</a> |
<a href="docs/SUMMARY.md">Table des matières Documentation</a>
</p>
@@ -38,8 +66,8 @@ Built by students and members of the Harvard, MIT, and Sundai communities.
<p align="center">
<strong>Quick links:</strong>
<a href="docs/reference/README.md">Reference</a> ·
<a href="docs/operations/README.md">Operations</a> ·
<a href="docs/troubleshooting.md">Troubleshooting</a> ·
<a href="docs/ops/README.md">Operations</a> ·
<a href="docs/ops/troubleshooting.md">Troubleshooting</a> ·
<a href="docs/security/README.md">Security</a> ·
<a href="docs/hardware/README.md">Hardware</a> ·
<a href="docs/contributing/README.md">Contributing</a>
@@ -63,7 +91,7 @@ Use this table for important notices (compatibility changes, security
| Date (UTC) | Level | Notice | Action |
| ---------- | ----------- | ------ | ------ |
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and this domain/repository is impersonating our official website/project. | Do not trust information, binaries, fundraising, or announcements from these sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience during the wait. We are still seeing impersonation attempts: do not take part in any investment/fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience during the wait. We are still seeing impersonation attempts: do not take part in any investment/fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), and [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its authentication and credential-use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the Consumer Terms of Use. | Please temporarily avoid Claude Code OAuth integrations to prevent any potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
### ✨ Features
@@ -95,7 +123,7 @@ Quick benchmark on a local machine (macOS arm64, Feb 2026), normalized for ma
> Notes: ZeroClaw results are measured on production builds using `/usr/bin/time -l`. OpenClaw requires the Node.js runtime (typically ~390 MB of extra memory overhead), while NanoBot requires the Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.
<p align="center">
<img src="zero-claw.jpeg" alt="Comparaison ZeroClaw vs OpenClaw" width="800" />
<img src="docs/assets/zeroclaw-comparison.jpeg" alt="Comparaison ZeroClaw vs OpenClaw" width="800" />
</p>
### Reproducible local measurement
@@ -188,10 +216,10 @@ Sample output (macOS arm64, measured February 18, 2026):
### Option 1: Automated setup (recommended)
-The `bootstrap.sh` script installs Rust, clones ZeroClaw, builds it, and sets up your initial development environment:
+The `install.sh` script installs Rust, clones ZeroClaw, builds it, and sets up your initial development environment:
```bash
-curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/bootstrap.sh | bash
+curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/install.sh | bash
```
This will:
@@ -247,9 +275,9 @@ Once installed (via bootstrap or manually), you should see:
**Next steps:**
1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
-2. See the [configuration reference](docs/config-reference.md) for advanced options
+2. See the [configuration reference](docs/reference/api/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
-4. Test via your preferred channel (see the [channels reference](docs/channels-reference.md))
+4. Test via your preferred channel (see the [channels reference](docs/reference/api/channels-reference.md))
## Configuration
@@ -285,10 +313,10 @@ kind = "native" # or "docker" (requires Docker)
**Complete reference documents:**
-- [Configuration Reference](docs/config-reference.md) - all settings, validation, defaults
-- [Providers Reference](docs/providers-reference.md) - AI-provider-specific configurations
-- [Channels Reference](docs/channels-reference.md) - Telegram, Matrix, Slack, Discord, and more
-- [Operations](docs/operations-runbook.md) - production monitoring, secret rotation, scaling
+- [Configuration Reference](docs/reference/api/config-reference.md) - all settings, validation, defaults
+- [Providers Reference](docs/reference/api/providers-reference.md) - AI-provider-specific configurations
+- [Channels Reference](docs/reference/api/channels-reference.md) - Telegram, Matrix, Slack, Discord, and more
+- [Operations](docs/ops/operations-runbook.md) - production monitoring, secret rotation, scaling
### Runtime support (current)
@@ -297,7 +325,7 @@ ZeroClaw supports two code-execution backends:
- **`native`** (default) - direct process execution, fastest path, ideal for trusted environments
- **`docker`** - full container isolation, hardened security policies, requires Docker
-Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/config-reference.md#runtime) for full details.
+Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/reference/api/config-reference.md#runtime) for full details.
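As a minimal sketch (assuming the `~/.zeroclaw/workspace/config.toml` path and the `runtime.kind` key shown above; verify the exact layout against the configuration reference), opting into the Docker backend is a small config change:

```bash
# Minimal sketch: switch the execution backend to Docker.
# Assumes config lives at ~/.zeroclaw/workspace/config.toml as described above;
# edit the existing [runtime] table instead of appending if one is already present.
cat >> ~/.zeroclaw/workspace/config.toml <<'EOF'
[runtime]
kind = "docker"   # full container isolation; requires a running Docker daemon
EOF
zeroclaw doctor   # re-run the health checks to confirm Docker is reachable
```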
## Commands
@@ -331,7 +359,7 @@ zeroclaw doctor # Runs system health checks
zeroclaw version # Shows version and build information
```
-See the [Commands Reference](docs/commands-reference.md) for full options and examples.
+See the [Commands Reference](docs/reference/cli/commands-reference.md) for full options and examples.
## Architecture
@@ -378,7 +406,7 @@ See the [Commands Reference](docs/commands-reference.md) for full options and examples.
- The runtime abstracts code execution (native or Docker)
- No vendor lock-in: swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes
-See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
+See the [architecture documentation](docs/assets/architecture.svg) for detailed diagrams and implementation details.
## Examples
@@ -412,7 +440,7 @@ device_name = "zeroclaw-prod"
e2ee_enabled = true
```
-Invite `@zeroclaw:matrix.org` into an encrypted room, and the bot will reply fully encrypted. See the [Matrix E2EE Guide](docs/matrix-e2ee-guide.md) for device-verification setup.
+Invite `@zeroclaw:matrix.org` into an encrypted room, and the bot will reply fully encrypted. See the [Matrix E2EE Guide](docs/security/matrix-e2ee-guide.md) for device-verification setup.
### Multi-provider
@@ -451,7 +479,7 @@ kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```
-See the [Configuration Reference](docs/config-reference.md#memory) for all memory options.
+See the [Configuration Reference](docs/reference/api/config-reference.md#memory) for all memory options.
## Provider support
@@ -480,7 +508,7 @@ model = "your-model-name"
Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to reach any LLM through the OpenAI interface.
-See the [Providers Reference](docs/providers-reference.md) for full configuration details.
+See the [Providers Reference](docs/reference/api/providers-reference.md) for full configuration details.
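As an illustrative sketch only (the table and key names below are assumptions, not confirmed schema; LiteLLM's proxy listens on port 4000 by default), routing ZeroClaw through a local LiteLLM proxy might look like:

```bash
# Hypothetical sketch: point an OpenAI-compatible provider at a local LiteLLM proxy.
# Key names are illustrative; check docs/reference/api/providers-reference.md for the real schema.
cat >> ~/.zeroclaw/workspace/config.toml <<'EOF'
[provider]
kind = "openai"                     # speak the OpenAI wire format...
base_url = "http://127.0.0.1:4000"  # ...but send requests to LiteLLM instead
api_key = "dummy"                   # LiteLLM holds the real upstream credentials
model = "your-model-name"
EOF
```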
## Channel support
@@ -494,7 +522,7 @@ See the [Providers Reference](docs/providers-reference.md) for full configuration details.
| **CLI** | ✅ Stable | None | Direct conversational interface |
| **Web** | 🚧 Planned | API key or OAuth | Browser-based chat interface |
-See the [Channels Reference](docs/channels-reference.md) for full setup instructions.
+See the [Channels Reference](docs/reference/api/channels-reference.md) for full setup instructions.
## Tool support
@@ -522,7 +550,7 @@ kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
```
-See the [Configuration Reference](docs/config-reference.md#runtime) for full security options.
+See the [Configuration Reference](docs/reference/api/config-reference.md#runtime) for full security options.
## Deployment
@@ -557,7 +585,7 @@ sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```
-See the [Network Deployment Guide](docs/network-deployment.md) for full production deployment instructions.
+See the [Network Deployment Guide](docs/ops/network-deployment.md) for full production deployment instructions.
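As a quick verification step under the same assumptions (the `zeroclaw-daemon` and `zeroclaw-agent` unit names from the snippet above), standard systemd tooling can watch the services:

```bash
# Tail live logs for both services (unit names taken from the snippet above)
sudo journalctl -u zeroclaw-daemon -u zeroclaw-agent -f
# Once the health checks look clean, enable start-on-boot
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
```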
### Docker
@@ -600,7 +628,7 @@ Supported tunnel providers:
- **Ngrok** - quick setup, custom domains (paid plan)
- **Tailscale** - private mesh network, no public port
-See the [Configuration Reference](docs/config-reference.md#tunnel) for full configuration options.
+See the [Configuration Reference](docs/reference/api/config-reference.md#tunnel) for full configuration options.
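A hypothetical sketch of selecting one of those providers (the `[tunnel]` table and `kind` key are assumptions for illustration; the configuration reference linked above has the real option names):

```bash
# Hypothetical sketch: expose the local gateway through ngrok.
# Table and key names are illustrative, not confirmed schema.
cat >> ~/.zeroclaw/workspace/config.toml <<'EOF'
[tunnel]
kind = "ngrok"   # or "tailscale" for a private mesh with no public port
EOF
```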
## Security
@@ -659,7 +687,7 @@ max_size_mb = 100 # For size-based rotation
retention_days = 30 # Automatic purge after N days
```
-See the [Configuration Reference](docs/config-reference.md#logging) for all logging options.
+See the [Configuration Reference](docs/reference/api/config-reference.md#logging) for all logging options.
### Metrics (planned)
@@ -776,32 +804,32 @@ Start with the documentation hub for a task-based map:
- Documentation hub: [`docs/README.md`](docs/README.md)
- Unified docs table of contents: [`docs/SUMMARY.md`](docs/SUMMARY.md)
-- Commands reference: [`docs/commands-reference.md`](docs/commands-reference.md)
-- Configuration reference: [`docs/config-reference.md`](docs/config-reference.md)
-- Providers reference: [`docs/providers-reference.md`](docs/providers-reference.md)
-- Channels reference: [`docs/channels-reference.md`](docs/channels-reference.md)
-- Operations runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
-- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
-- Docs inventory/classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
-- PR/issue triage snapshot (as of February 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
+- Commands reference: [`docs/reference/cli/commands-reference.md`](docs/reference/cli/commands-reference.md)
+- Configuration reference: [`docs/reference/api/config-reference.md`](docs/reference/api/config-reference.md)
+- Providers reference: [`docs/reference/api/providers-reference.md`](docs/reference/api/providers-reference.md)
+- Channels reference: [`docs/reference/api/channels-reference.md`](docs/reference/api/channels-reference.md)
+- Operations runbook: [`docs/ops/operations-runbook.md`](docs/ops/operations-runbook.md)
+- Troubleshooting: [`docs/ops/troubleshooting.md`](docs/ops/troubleshooting.md)
+- Docs inventory/classification: [`docs/maintainers/docs-inventory.md`](docs/maintainers/docs-inventory.md)
+- PR/issue triage snapshot (as of February 18, 2026): [`docs/maintainers/project-triage-snapshot-2026-02-18.md`](docs/maintainers/project-triage-snapshot-2026-02-18.md)
Primary collaboration references:
- Documentation hub: [docs/README.md](docs/README.md)
-- Documentation template: [docs/doc-template.md](docs/doc-template.md)
+- Documentation template: [docs/contributing/doc-template.md](docs/contributing/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
-- Channel configuration reference: [docs/channels-reference.md](docs/channels-reference.md)
-- Matrix encrypted-room operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
+- Channel configuration reference: [docs/reference/api/channels-reference.md](docs/reference/api/channels-reference.md)
+- Matrix encrypted-room operations: [docs/security/matrix-e2ee-guide.md](docs/security/matrix-e2ee-guide.md)
- Contributing guide: [CONTRIBUTING.md](CONTRIBUTING.md)
-- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
-- Reviewer playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
-- CI ownership and triage map: [docs/ci-map.md](docs/ci-map.md)
+- PR workflow policy: [docs/contributing/pr-workflow.md](docs/contributing/pr-workflow.md)
+- Reviewer playbook (triage + deep review): [docs/contributing/reviewer-playbook.md](docs/contributing/reviewer-playbook.md)
+- CI ownership and triage map: [docs/contributing/ci-map.md](docs/contributing/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)
For deployment and runtime operations:
-- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
-- Proxy agent playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
+- Network deployment guide: [docs/ops/network-deployment.md](docs/ops/network-deployment.md)
+- Proxy agent playbook: [docs/ops/proxy-agent-playbook.md](docs/ops/proxy-agent-playbook.md)
## Supporting ZeroClaw
@@ -826,7 +854,7 @@ We build in the open because the best ideas come from everywhere.
> <https://github.com/zeroclaw-labs/zeroclaw>
-Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and unaffiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).
+Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and unaffiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](docs/maintainers/trademark.md).
If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
@@ -841,11 +869,11 @@ ZeroClaw is dual-licensed for maximum openness and contributor protection.
| [MIT](LICENSE-MIT) | Open source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |
-You may choose either license. **Contributors automatically grant rights under both** - see [CLA.md](CLA.md) for the full contributor agreement.
+You may choose either license. **Contributors automatically grant rights under both** - see [CLA.md](docs/contributing/cla.md) for the full contributor agreement.
### Trademark
-The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
+The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](docs/maintainers/trademark.md) for permitted and prohibited uses.
### Contributor protections
@@ -856,9 +884,9 @@ The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This
## Contributing
-See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
+See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](docs/contributing/cla.md). Implement a trait, submit a PR:
-- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
+- CI workflow guide: [docs/contributing/ci-map.md](docs/contributing/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
- New `Observer` → `src/observability/`
README.he.md (new file, +197)
@@ -0,0 +1,197 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center" dir="rtl">
<strong>תקורת אפס. אין פשרות. 100% Rust. 100% אגנוסטי.</strong><br>
⚡️ <strong>פועל על חומרה ב-$10 עם <5MB זיכרון: זה 99% פחות זיכרון מ-OpenClaw ו-98% זול יותר מ-Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>
<p align="center" dir="rtl">
🌐 <strong>שפות:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
---
## What is ZeroClaw?
ZeroClaw is a lightweight, mutable, and extensible AI assistant infrastructure built in Rust. It connects different LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).
### Key features
- **🦀 Written in Rust**: high performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: supports OpenAI, Anthropic, Google Gemini, Ollama, and others
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and others
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: add custom tools easily
- **🔒 Security-first**: reverse proxy, privacy-first design
---
## Quick start
### Prerequisites
- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)
### Installation
```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```
### With Docker
```bash
docker run -d \
--name zeroclaw \
-e ANTHROPIC_API_KEY=your_key \
-v zeroclaw-data:/app/data \
zeroclaw/zeroclaw:latest
```
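To confirm the container came up (container name as given in the run command above), standard Docker commands suffice:

```bash
# Check container status and tail its logs
docker ps --filter name=zeroclaw
docker logs -f zeroclaw
```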
---
## Configuration
ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.
```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```
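Since the placeholders above reference environment variables, a typical launch (assuming shell-style `${...}` expansion works the way the config suggests) exports the secrets first:

```bash
# Export the secrets that config.yaml references, then start ZeroClaw
export ANTHROPIC_API_KEY="sk-ant-..."
export TELEGRAM_BOT_TOKEN="123456:ABC..."
cargo run --release
```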
---
## Documentation
For detailed documentation, see:
- [Documentation hub](docs/README.md)
- [Commands reference](docs/commands-reference.md)
- [Providers reference](docs/providers-reference.md)
- [Channels reference](docs/channels-reference.md)
- [Configuration reference](docs/config-reference.md)
---
## Contributing
Contributions are welcome! Please read the [contributing guide](CONTRIBUTING.md).
---
## License
This project is dual-licensed:
- MIT License
- Apache License, Version 2.0
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.
---
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
## Sponsors
If ZeroClaw is useful to you, please consider buying us a coffee:
[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)
README.hi.md (new file, +179)
@@ -0,0 +1,179 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>
<h1 align="center">ZeroClaw 🦀</h1>
<p align="center">
<strong>Zero overhead. Zero compromise. 100% Rust. 100% agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with &lt;5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>
<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>
<p align="center">
🌐 <strong>Languages:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>
---
## What is ZeroClaw?
ZeroClaw is a lightweight, mutable, and extensible AI assistant infrastructure built in Rust. It connects various LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).
### Key features
- **🦀 Written in Rust**: high performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: supports OpenAI, Anthropic, Google Gemini, Ollama, and others
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and others
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: add custom tools easily
- **🔒 Security-first**: reverse proxy, privacy-first design
---
## Quick start
### Prerequisites
- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)
### Installation
```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```
### With Docker
```bash
docker run -d \
--name zeroclaw \
-e ANTHROPIC_API_KEY=your_key \
-v zeroclaw-data:/app/data \
zeroclaw/zeroclaw:latest
```
---
## Configuration
ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.
```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```
---
## Documentation
For detailed documentation, see:
- [Documentation hub](docs/README.md)
- [Commands reference](docs/commands-reference.md)
- [Providers reference](docs/providers-reference.md)
- [Channels reference](docs/channels-reference.md)
- [Configuration reference](docs/config-reference.md)
---
## Contributing
Contributions are welcome! Please read the [contributing guide](CONTRIBUTING.md).
---
## License
This project is dual-licensed:
- MIT License
- Apache License, Version 2.0
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.
---
## Community
- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
---
## Sponsors
If ZeroClaw is useful to you, please consider buying us a coffee:
[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)
Some files were not shown because too many files have changed in this diff.