Compare commits
314 Commits
| SHA1 | Author | Date | |
|---|---|---|---|
| 6bd363f139 | |||
| 7170810e98 | |||
| 816fa87d60 | |||
| dcffa4d7fb | |||
| e03dc4bfce | |||
| 2cb57e6b2d | |||
| 079e972c81 | |||
| 448682c440 | |||
| 3fe3fe23b1 | |||
| 9e9052634d | |||
| b227b1abc3 | |||
| fb9501afd5 | |||
| 9df0a76f4a | |||
| 9501555448 | |||
| 138ada0fd6 | |||
| 3c2c5aa78c | |||
| 273bd00d08 | |||
| 546873e2bb | |||
| bf7f568eef | |||
| d33b73974b | |||
| a99ec631aa | |||
| ab6846cb9f | |||
| ca683816a7 | |||
| 59014641bf | |||
| 8d7abb73e7 | |||
| 7ddf10f86d | |||
| 6352f024b2 | |||
| ef2f8e9808 | |||
| 6f482051ec | |||
| 8bed9c5485 | |||
| 6861664258 | |||
| 5cf1c77531 | |||
| 483d3c0853 | |||
| f5cd6baec3 | |||
| dfe0221e49 | |||
| 12cfe48047 | |||
| 0479bfca36 | |||
| 67e581d8ae | |||
| 195c7ba919 | |||
| bb66df5276 | |||
| d47f6703d8 | |||
| 74fe29d772 | |||
| 86ca34ac1f | |||
| f0f0f80895 | |||
| 01bcba83ad | |||
| 4da85cee6e | |||
| bdc0f325bf | |||
| 248348bd80 | |||
| bd70c0f45b | |||
| d0edcec1f9 | |||
| 9423b9c94e | |||
| 95da0062de | |||
| 7634d8d1f8 | |||
| 744620bc34 | |||
| f67abd9fc8 | |||
| 94a4d72e71 | |||
| 39353748fa | |||
| 71f013af49 | |||
| 74f411bd2b | |||
| 50a65ea0e5 | |||
| 655e5fd56a | |||
| 072a10eff4 | |||
| 9baa02a40b | |||
| 0af0f0344e | |||
| 9cfc88c38c | |||
| 87127e2a08 | |||
| 7cacdef2d1 | |||
| 9469b5bdbe | |||
| ecbe5e2c68 | |||
| 86a2fc2594 | |||
| 1c2a49459e | |||
| e1f37a307e | |||
| fd40eeb408 | |||
| 0ee3b6d617 | |||
| d0781f31a5 | |||
| a3bc7a04dd | |||
| 5901f70dc0 | |||
| 026158557c | |||
| 162a2b65a5 | |||
| e6bfb1ebee | |||
| 9a6de3b204 | |||
| f894bc4140 | |||
| a1c65558d8 | |||
| ed7c191fcd | |||
| 69abd217d7 | |||
| f2035819c2 | |||
| 46d68fc8ba | |||
| 06926a189d | |||
| 8f68afb70c | |||
| 6016491985 | |||
| bf11b7b1a0 | |||
| 436619b015 | |||
| c6233b066c | |||
| b33af6894e | |||
| f98a18502f | |||
| 14de5e527e | |||
| 7b7e08dc21 | |||
| 526d63fd75 | |||
| e20e0f7cb0 | |||
| e3cc40c64c | |||
| 9d182c6dd8 | |||
| d7d114eae7 | |||
| b1dfa192b8 | |||
| 7ed1bdc104 | |||
| 7f6bd651f7 | |||
| 491ac601e6 | |||
| b4da085f0c | |||
| 0829bb92df | |||
| 7950f51bb9 | |||
| 79cff39aa5 | |||
| e94dafbbfb | |||
| 99460c3fff | |||
| 8e3775362e | |||
| 1ef1b5d02d | |||
| e2a3907d35 | |||
| 980ad7ebc9 | |||
| da0a592163 | |||
| a6cd877ac8 | |||
| 138ff4249c | |||
| ff2da533a5 | |||
| 86ca1ba2b6 | |||
| f16a71826c | |||
| 0c5b41b288 | |||
| f127ca8c93 | |||
| 2e86d83ed7 | |||
| 9b9b85217c | |||
| 0e006b91e4 | |||
| d93d7dab84 | |||
| ee7048502c | |||
| 2c203571d0 | |||
| 9781f07a98 | |||
| 4600469717 | |||
| b2857cb836 | |||
| 76d54193f1 | |||
| 046040d535 | |||
| 283385624e | |||
| f73f39d33d | |||
| d07d314fd0 | |||
| f7b057a743 | |||
| 276e3f67ca | |||
| 0a9acf8150 | |||
| 535995f2ab | |||
| 863d731b92 | |||
| f19d4951b6 | |||
| 7399894cc8 | |||
| 6e7c13a864 | |||
| 7ef9d8a7b5 | |||
| 8d48472c9e | |||
| d1fffc3b74 | |||
| 8538c4105a | |||
| 08d6959e0d | |||
| ad8e1e65e0 | |||
| c204d72cc4 | |||
| 852b67fff3 | |||
| df4e11bd1d | |||
| 991a1ea9cc | |||
| 97ebfe0549 | |||
| ea61491f2a | |||
| 1a78dcf447 | |||
| 620e4adee5 | |||
| 631e7f61b5 | |||
| 36bc6c4aee | |||
| 67b0ff0ee9 | |||
| c99a046b1f | |||
| d17e9586ae | |||
| 9f2655dc4d | |||
| 0b9a975da5 | |||
| 507c93fcf5 | |||
| a6e5a6ffda | |||
| cf21cf093a | |||
| b454a9d301 | |||
| c8df83dd17 | |||
| 225edecc3d | |||
| 465b2ea6af | |||
| e70ca52dc4 | |||
| 45d2368730 | |||
| 026f2609f9 | |||
| a6cf7d4015 | |||
| 48c7414bb3 | |||
| f7fefd4b6c | |||
| 2d359c1f74 | |||
| 8439b40145 | |||
| 57065b07a3 | |||
| a7e295c966 | |||
| d4caba0967 | |||
| 8c0375a9ba | |||
| fa2faf408d | |||
| a6102f8dd6 | |||
| b4d619dd2b | |||
| e098c6d242 | |||
| 5dfe3372f5 | |||
| ce6349741b | |||
| e3612880f3 | |||
| 73b862bb1f | |||
| 31c027ed6d | |||
| 571091ecef | |||
| 44ac470d78 | |||
| 92e0f7aefd | |||
| db536935bf | |||
| d64be99621 | |||
| f981d9ea69 | |||
| 8230b26171 | |||
| 4f62fb2ecb | |||
| fee163fc74 | |||
| 32e8dbbec5 | |||
| 6eef5bafcb | |||
| 53c1a3ecea | |||
| db917bc37b | |||
| 47ea46e694 | |||
| 9923544769 | |||
| c3cf915b94 | |||
| 1be22c7e84 | |||
| 07fdea528d | |||
| a787cbcc70 | |||
| 7305f6df59 | |||
| 46c98ffa26 | |||
| a3d5631757 | |||
| c9d76780f0 | |||
| 16c509f939 | |||
| 306cf16dc5 | |||
| 658f54f41b | |||
| 040ef56457 | |||
| f56dd6f8ea | |||
| d4849d333b | |||
| bfb7320c49 | |||
| bfd56f2ba9 | |||
| 921132575d | |||
| 05e88f81ea | |||
| 12faff3aa9 | |||
| 668d8fb1fa | |||
| 52753cb05a | |||
| 3493afc068 | |||
| 63d9020d6a | |||
| 9d681dc13b | |||
| 1b90a23eed | |||
| 7fbf65304b | |||
| 7e01f5d7fd | |||
| 3d936a31b5 | |||
| 64d13c236e | |||
| 0fc812f7db | |||
| e76d3e6312 | |||
| 987f8888b3 | |||
| 7310ba67c5 | |||
| 34baae91ff | |||
| 5fc8b673d8 | |||
| a22dc39ef6 | |||
| 9ecf9739ed | |||
| c7967a1055 | |||
| 61050eace9 | |||
| 3f9f9c33bc | |||
| f2abf9ac2f | |||
| 52b05a7c34 | |||
| 6ae134dd3c | |||
| 389ecf0499 | |||
| 0910b394b8 | |||
| c43aaa10f3 | |||
| 96700d7952 | |||
| d78e3e253e | |||
| 3b2009f15a | |||
| e748e55feb | |||
| 2efe98da79 | |||
| b9b97eeaef | |||
| df54237a73 | |||
| fdd7daae6d | |||
| baa01dab66 | |||
| 77a3b39ff7 | |||
| 15061f9605 | |||
| f227a8f4d6 | |||
| 35ecaaf435 | |||
| daeee93f89 | |||
| 3e9474309a | |||
| 2fb72438f8 | |||
| ae3f348a15 | |||
| d193cf036f | |||
| 1455f08fbb | |||
| bbcbccf20c | |||
| 2df4e902f6 | |||
| c252ad474a | |||
| 229830ce17 | |||
| 02b1702a48 | |||
| ef47cf14c3 | |||
| 1a0e5547d7 | |||
| 055507bd18 | |||
| 50f537fa6a | |||
| 731545e405 | |||
| f044237cc9 | |||
| 03328617c9 | |||
| 46ef41ac65 | |||
| fc8696b9b8 | |||
| d3c8ff6abe | |||
| 920568625b | |||
| 83e14a27aa | |||
| 79a2d992b0 | |||
| 7c6430126b | |||
| 24720c5dd5 | |||
| f1a1f3fdc7 | |||
| a01a84c8fe | |||
| 8d8f17804a | |||
| 6729d34cf1 | |||
| 1b131b5256 | |||
| aba3a146c1 | |||
| 5e4bbd39a5 | |||
| 9d4c9b1af9 | |||
| 409a74c72b | |||
| 775c05ad94 | |||
| 3554f6afff | |||
| 4a2503605d | |||
| d6283d2bab | |||
| ef8f2fed70 | |||
| ce53dcde46 | |||
| c6eb44438b | |||
| f162eede13 | |||
| 123be02653 | |||
| 742aa0208f |
@@ -0,0 +1,133 @@

# Skill: github-issue

File a structured GitHub issue (bug report or feature request) for ZeroClaw interactively from Claude Code.

## When to Use

Trigger when the user wants to file a GitHub issue, report a bug, or request a feature for ZeroClaw. Keywords: "file issue", "report bug", "feature request", "open issue", "create issue", "github issue".

## Instructions

You are filing a GitHub issue against the ZeroClaw repository using structured issue forms. Follow this workflow exactly.

### Step 1: Detect Issue Type and Read the Template

Determine from the user's message whether this is a **bug report** or a **feature request**.

- If unclear, use AskUserQuestion to ask: "Is this a bug report or a feature request?"

Then read the corresponding issue template to understand the required fields:

- Bug report: `.github/ISSUE_TEMPLATE/bug_report.yml`
- Feature request: `.github/ISSUE_TEMPLATE/feature_request.yml`

Parse the YAML to extract:

- The `title` prefix (e.g. `[Bug]: `, `[Feature]: `)
- The `labels` array
- Each field in the `body` array: its `type` (dropdown, textarea, input, checkboxes, markdown), `id`, `attributes.label`, `attributes.options` (for dropdowns), `attributes.description`, `attributes.placeholder`, and `validations.required`

This is the source of truth for what fields exist, what they're called, what options are available, and which are required. Do not assume or hardcode any field names or options — always derive them from the template file.
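The extraction step above can be sketched in Python. This is an illustrative sketch only: the `template` dict is a hypothetical stand-in for a loaded `bug_report.yml` (e.g. via `yaml.safe_load`), and the key names follow GitHub's issue-forms schema.

```python
# Hypothetical parsed template, as yaml.safe_load would return it.
template = {
    "title": "[Bug]: ",
    "labels": ["bug"],
    "body": [
        {"type": "markdown", "attributes": {"value": "Thanks for reporting!"}},
        {"type": "dropdown", "id": "component",
         "attributes": {"label": "Component", "options": ["provider", "channel"]},
         "validations": {"required": True}},
        {"type": "textarea", "id": "repro",
         "attributes": {"label": "Steps to reproduce"},
         "validations": {"required": True}},
    ],
}

def extract_fields(template):
    """Return (title_prefix, labels, form_fields), skipping markdown blocks."""
    fields = [
        {
            "type": f["type"],
            "id": f.get("id"),
            "label": f.get("attributes", {}).get("label"),
            "options": f.get("attributes", {}).get("options", []),
            "required": f.get("validations", {}).get("required", False),
        }
        for f in template.get("body", [])
        if f["type"] != "markdown"  # informational headers, not form inputs
    ]
    return template.get("title", ""), template.get("labels", []), fields

title_prefix, labels, fields = extract_fields(template)
```

Everything downstream (pre-filling, body construction) works from this normalized field list rather than the raw YAML.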
### Step 2: Auto-Gather Context

Before asking the user anything, silently gather environment and repo context:

```bash
# Git context
git log --oneline -5
git status --short
git diff --stat HEAD~1 2>/dev/null

# For bug reports — environment detection
uname -s -r -m        # OS info
sw_vers 2>/dev/null   # macOS version
rustc --version 2>/dev/null   # Rust version
cargo metadata --format-version=1 --no-deps 2>/dev/null | jq -r '.packages[] | select(.name=="zeroclaw") | .version' 2>/dev/null   # ZeroClaw version
git rev-parse --short HEAD    # commit SHA fallback
```

Also read recently changed files to infer the affected component and architecture impact.
### Step 3: Pre-Fill and Present the Form

Using the parsed template fields and gathered context, draft values for ALL fields from the template:

- **dropdown** fields: select the most likely option from `attributes.options` based on context. For dropdowns where you're uncertain, note your best guess and flag it for the user.
- **textarea** fields: draft content based on the user's description, git context, and the field's `attributes.description`/`attributes.placeholder` for guidance on what's expected.
- **input** fields: fill with auto-detected values (versions, OS) or draft from user context.
- **checkboxes** fields: auto-check all items (the skill itself ensures compliance with the stated checks).
- **markdown** fields: skip these — they're informational headers, not form inputs.
- **optional fields** (where `validations.required` is false): fill if there's enough context, otherwise note "(optional — not enough context to fill)".

Present the complete draft to the user in a clean, readable format:

```
## Issue Draft: [Bug]: <title> / [Feature]: <title>
**Labels**: <from template>

### <Field Label>
<proposed value or selection>

### <Field Label>
<proposed value>
...
```

Use AskUserQuestion to ask the user to review:

- "Here's the pre-filled issue. Please review and let me know what to change, or say 'submit' to file it."

If the user requests changes, update the draft and re-present. Iterate until the user approves.
### Step 4: Scope Guard

Before final submission, analyze the collected content for scope creep:

- Does the bug report describe multiple independent defects?
- Does the feature request bundle unrelated changes?

If multi-concept issues are detected:

1. Inform the user: "This issue appears to cover multiple distinct topics. Focused, single-concept issues are strongly preferred and more likely to be accepted."
2. Break down the distinct groups found.
3. Offer to file separate issues for each group, reusing shared context (environment, etc.).
4. Let the user decide: proceed as-is or split.
### Step 5: Construct Issue Body

Build the issue body as markdown sections matching GitHub's form-field rendering format. GitHub renders form-submitted issues with `### <Field Label>` sections, so use that exact structure.

For each non-markdown field from the template, in order:

```markdown
### <attributes.label>

<value>
```

For optional fields with no content, use `_No response_` as the value (this matches GitHub's native rendering for empty optional fields).

For checkbox fields, render each option as:

```markdown
- [X] <option label text>
```
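The assembly rules above can be sketched as follows. A minimal sketch, not the skill's actual implementation: `fields` is the parsed template field list and `values` is a hypothetical mapping of field ids to drafted content.

```python
def build_issue_body(fields, values):
    """Render fields as GitHub-style '### <label>' markdown sections."""
    sections = []
    for f in fields:
        if f["type"] == "markdown":
            continue  # informational, not a form input
        value = values.get(f["id"])
        if f["type"] == "checkboxes":
            # Auto-check every option, per the pre-fill rules.
            value = "\n".join(f"- [X] {opt}" for opt in f["options"])
        elif not value:
            value = "_No response_"  # GitHub's rendering for empty optionals
        sections.append(f"### {f['label']}\n\n{value}")
    return "\n\n".join(sections)

# Hypothetical parsed fields and drafted values for illustration.
fields = [
    {"type": "input", "id": "version", "label": "Version", "options": []},
    {"type": "textarea", "id": "notes", "label": "Notes", "options": []},
    {"type": "checkboxes", "id": "checks", "label": "Checklist",
     "options": ["I searched existing issues"]},
]
body = build_issue_body(fields, {"version": "0.3.1"})
```

The resulting `body` string is what gets passed to `gh issue create` in the final step.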
### Step 6: Final Preview and Submit

Show the final constructed issue (title + labels + full body) for one last confirmation.

Then submit using a HEREDOC for the body to preserve formatting:

```bash
gh issue create --title "<title prefix><user title>" --label "<label1>,<label2>" --body "$(cat <<'ISSUE_EOF'
<body content>
ISSUE_EOF
)"
```

Return the resulting issue URL to the user.
### Important Rules

- **Always read the template file** — never assume field names, options, or structure. The templates are the source of truth and may change over time.
- **Never include personal/sensitive data** in the issue. Redact secrets, tokens, emails, real names.
- **Use neutral project-scoped placeholders** per ZeroClaw's privacy contract.
- **One concept per issue** — enforce the scope guard.
- **Auto-detect, don't guess** — use real command output for environment fields.
- **Match GitHub's rendering** — use `### Field Label` sections so issues look consistent whether filed via web UI or this skill.
@@ -0,0 +1,209 @@

# Skill: github-pr

Open or update a GitHub Pull Request for ZeroClaw. Handles creating new PRs with a fully filled-out template body, and updating existing PRs (title, body sections, labels, comments). Use this skill whenever the user wants to open a PR, create a pull request, update a PR, edit a PR description, add labels to a PR, or sync a PR after new commits — even if they don't say "PR" explicitly (e.g., "submit this for review", "push and open for merge").

## Instructions

This skill supports two modes: **Open** (create a new PR) and **Update** (edit an existing PR). Detect the mode from context — if there's already an open PR for the current branch and the user didn't say "open a new PR", default to update mode.

The PR template at `.github/pull_request_template.md` is the source of truth for the PR body structure. Read it every time — never assume or hardcode section names, fields, or their order. The template may change over time, and the skill should always reflect its current state.

---
## Shared: Read the PR Template

Before opening or updating a PR body, read `.github/pull_request_template.md` and parse it to understand:

- The `## ` section headers (these are the top-level sections of the PR body)
- The bullet points, fields, and prompts within each section
- Which sections are marked `(required)` vs optional/recommended
- Any inline formatting conventions (backtick options, Yes/No fields, etc.)

This parsed structure drives how you fill, present, and edit the PR body.

---
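Splitting a template (or an existing PR body) on its `## ` headers can be sketched as below. An illustrative sketch, assuming sections are delimited only by level-2 headers, as the template convention above describes.

```python
def split_sections(body):
    """Split markdown into ordered (header, content) pairs by '## ' headers."""
    sections, header, buf = [], None, []
    for line in body.splitlines():
        if line.startswith("## "):
            if header is not None:
                sections.append((header, "\n".join(buf).strip()))
            header, buf = line[3:].strip(), []
        elif header is not None:
            buf.append(line)  # content before the first header is ignored
    if header is not None:
        sections.append((header, "\n".join(buf).strip()))
    return sections

# Hypothetical miniature PR body for illustration.
demo = "## Summary\nFixes a crash.\n\n## Testing\ncargo test"
parts = split_sections(demo)
```

The same pair list is reused in both modes: to know which sections to fill when opening, and to locate a target section when updating.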
## Mode: Open a New PR

### Step 1: Gather Context

Collect information to pre-fill the PR body. Run these in parallel:

```bash
# Branch and commit context
git branch --show-current
git log master..HEAD --oneline
git diff master...HEAD --stat

# Check if branch is pushed
git rev-parse --abbrev-ref --symbolic-full-name @{u} 2>/dev/null

# Environment (for validation evidence)
rustc --version 2>/dev/null
```

Also review the changed files and commit messages to understand the nature of the change (bug fix, feature, refactor, docs, chore, etc.) and which subsystems are affected.
### Step 2: Pre-Fill the Template

Using the parsed template structure and gathered context, draft a complete PR body:

- For each `## ` section from the template, fill in the bullet points and fields based on context from the commits, diff, and changed files.
- Use the field descriptions and placeholder text in the template as guidance for what each field expects.
- For Yes/No fields, infer from the diff (e.g., if no files in `src/security/` changed, security impact is likely all No).
- For required sections, always provide a substantive answer. For optional sections, fill if there's enough context, otherwise leave the template prompts in place.
- Draft a conventional commit-style PR title based on the changes (e.g., `feat(provider): add retry budget override`, `fix(channel): handle disconnect gracefully`, `chore(ci): update workflow targets`).
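One rough heuristic for the title's scope component is to take the most-touched top-level source directory. A purely illustrative sketch, with hypothetical paths; the skill's real judgment also weighs commit messages and diff content.

```python
def suggest_scope(paths):
    """Guess a conventional-commit scope from changed file paths."""
    counts = {}
    for p in paths:
        parts = p.split("/")
        # Prefer the subsystem directory under src/, else the top-level entry.
        key = parts[1] if parts[0] == "src" and len(parts) > 1 else parts[0]
        counts[key] = counts.get(key, 0) + 1
    return max(counts, key=counts.get) if counts else None

# Hypothetical changed files for illustration.
scope = suggest_scope(["src/provider/retry.rs", "src/provider/mod.rs", "README.md"])
```

A scope of `provider` here would yield a title like `feat(provider): ...`, to be confirmed by the user in the next step.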
### Step 3: Present Draft for Review

Show the user the complete draft:

```
## PR Draft: <title>
**Branch**: <head> -> master
**Labels**: <suggested labels>

<full body with all sections filled>
```

Ask the user to review: "Here's the pre-filled PR. Review and let me know what to change, or say 'submit' to open it."

Iterate on changes until the user approves.
### Step 4: Push and Create

1. If the branch isn't pushed yet, push it:
   ```bash
   git push -u origin <branch>
   ```

2. Create the PR using a HEREDOC for the body:
   ```bash
   gh pr create --title "<title>" --base master --body "$(cat <<'PR_BODY_EOF'
   <full body>
   PR_BODY_EOF
   )"
   ```

3. If labels were agreed on, add them:
   ```bash
   gh pr edit <number> --add-label "<label1>,<label2>"
   ```

4. Return the PR URL to the user.
---

## Mode: Update an Existing PR

### Step 1: Identify the PR

1. **If a PR number or URL is given**: use that directly.
2. **If on a branch with an open PR**: auto-detect:
   ```bash
   gh pr view --json number,title,body,labels,state,author,url,headRefName 2>/dev/null
   ```
3. **If neither**: ask the user for the PR number.

Verify the current user is the PR author:

```bash
CURRENT_USER=$(gh api user --jq '.login')
PR_AUTHOR=$(gh pr view <number> --json author --jq '.author.login')
[ "$CURRENT_USER" = "$PR_AUTHOR" ] || echo "warning: not the PR author"
```

If not the author, stop and inform the user.
### Step 2: Fetch Current State

```bash
gh pr view <number> --json number,title,body,labels,state,baseRefName,headRefName,url,author,reviewDecision,statusCheckRollup,commits
```

Display a summary:

```
## PR #<number>: <title>
**State**: <open/closed/merged>
**Branch**: <head> -> <base>
**Labels**: <label list>
**Checks**: <pass/fail/pending>
**URL**: <url>
```
### Step 3: Determine What to Update

Support these operations:

| Operation | How |
|---|---|
| **Edit title** | `gh pr edit <number> --title "<new title>"` |
| **Edit full body** | `gh pr edit <number> --body "<new body>"` |
| **Add labels** | `gh pr edit <number> --add-label "<label1>,<label2>"` |
| **Remove labels** | `gh pr edit <number> --remove-label "<label1>"` |
| **Edit specific section** | Parse body by `## ` headers, modify target section, re-submit full body |
| **Add a comment** | `gh pr comment <number> --body "<comment>"` |
| **Link an issue** | Edit the linked-issue section in the body |
| **Smart update after new commits** | Re-analyze and suggest section updates |
### Step 4: Handle Body Section Edits

When editing a specific section:

1. Parse the current PR body into sections by `## ` headers
2. Match the user's request to the corresponding section from the template
3. Show the current content of that section and the proposed replacement
4. On confirmation, modify only that section, reconstruct the full body, and submit
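The surgical edit described in the steps above can be sketched with a single regex substitution. A minimal sketch, assuming sections are delimited only by `## ` headers; the surrounding sections are left byte-for-byte intact.

```python
import re

def replace_section(body, section, new_content):
    """Replace one '## <section>' block, up to the next header or EOF."""
    pattern = rf"(^## {re.escape(section)}\n)(.*?)(?=^## |\Z)"
    repl = lambda m: m.group(1) + new_content.rstrip("\n") + "\n\n"
    return re.sub(pattern, repl, body, count=1, flags=re.M | re.S)

# Hypothetical miniature PR body for illustration.
body = "## Summary\nOld text.\n\n## Testing\ncargo test\n"
updated = replace_section(body, "Summary", "New summary.")
```

The reconstructed `updated` string is what would be passed to `gh pr edit --body` after the user confirms the diff.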
### Step 5: Smart Update After New Commits

When the user wants to sync the PR description after pushing new changes:

1. Identify new commits:
   ```bash
   gh pr view <number> --json commits --jq '.commits[].messageHeadline'
   git log <base>..<head> --oneline
   git diff <base>...<head> --stat
   ```

2. Re-read the PR template. Analyze which sections are now stale based on the new changes — use the template's section names and field descriptions to identify what needs updating rather than relying on hardcoded assumptions.

3. Present proposed updates section-by-section and confirm before applying.
### Step 6: Apply Updates

For title/label changes, use direct `gh pr edit` flags.

For body edits, use a HEREDOC:

```bash
gh pr edit <number> --body "$(cat <<'PR_BODY_EOF'
<full updated body>
PR_BODY_EOF
)"
```

For comments:

```bash
gh pr comment <number> --body "$(cat <<'COMMENT_EOF'
<comment text>
COMMENT_EOF
)"
```
### Step 7: Confirm

Fetch and display the updated state:

```bash
gh pr view <number> --json number,title,labels,url
```

Return the PR URL.

---
## Important Rules

- **Always read `.github/pull_request_template.md`** before filling or editing a PR body. Never assume section names, fields, or structure — derive everything from the template. It's the source of truth and may change.
- **For updates, only modify requested sections.** Preserve everything else exactly as-is.
- **Always show diffs before applying body edits.** Present current vs proposed for each changed section.
- **Never include personal/sensitive data** in PR content per ZeroClaw's privacy contract.
- **For label changes**, only use labels that exist in the repository. Check with `gh label list` if unsure.
- **Fetch the latest body before editing** to avoid clobbering concurrent changes.
- **For new PRs**, push the branch before creating (with `-u` to set upstream tracking).
@@ -0,0 +1,202 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@@ -0,0 +1,485 @@
|
||||
---
name: skill-creator
description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
---

# Skill Creator

A skill for creating new skills and iteratively improving them.

At a high level, the process of creating a skill goes like this:

- Decide what you want the skill to do and roughly how it should do it
- Write a draft of the skill
- Create a few test prompts and run claude-with-access-to-the-skill on them
- Help the user evaluate the results both qualitatively and quantitatively
- While the runs happen in the background, draft some quantitative evals if there aren't any; if some already exist, use them as-is or modify them if something needs to change. Either way, explain them to the user
- Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
- Repeat until you're satisfied
- Expand the test set and try again at larger scale

Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.

On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.

Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.

Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.

Cool? Cool.

## Communicating with the user

The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, and parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.

So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:

- "evaluation" and "benchmark" are borderline, but OK
- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them unexplained

If you're in doubt whether the user will get a term, it's OK to briefly clarify it with a short definition.

---

## Creating a skill

### Capture Intent

Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. Ask the user to fill any gaps, and have them confirm before proceeding to the next step.

1. What should this skill enable Claude to do?
2. When should this skill trigger? (what user phrases/contexts)
3. What's the expected output format?
4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.

### Interview and Research

Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.

Check available MCPs - if any are useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce the burden on the user.

### Write the SKILL.md

Based on the user interview, fill in these components:

- **name**: Skill identifier
- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
- **compatibility**: Required tools, dependencies (optional, rarely needed)
- **the rest of the skill :)**

### Skill Writing Guide

#### Anatomy of a Skill

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter (name, description required)
│   └── Markdown instructions
└── Bundled Resources (optional)
    ├── scripts/ - Executable code for deterministic/repetitive tasks
    ├── references/ - Docs loaded into context as needed
    └── assets/ - Files used in output (templates, icons, fonts)
```

#### Progressive Disclosure

Skills use a three-level loading system:
1. **Metadata** (name + description) - Always in context (~100 words)
2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)

These word counts are approximate; feel free to go longer if needed.

**Key patterns:**
- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
- Reference files clearly from SKILL.md with guidance on when to read them
- For large reference files (>300 lines), include a table of contents

**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
    ├── aws.md
    ├── gcp.md
    └── azure.md
```
Claude reads only the relevant reference file.

#### Principle of Lack of Surprise

This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. If a skill's contents were described to the user, they should not be surprised by what's in it. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.

#### Writing Patterns

Prefer using the imperative form in instructions.

**Defining output formats** - You can do it like this:
```markdown
## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations
```

**Examples pattern** - It's useful to include examples. You can format them like this (though if the examples themselves contain "Input" and "Output" you might want to deviate a little):
```markdown
## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication
```

### Writing Style

Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.

### Test Cases

After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.

Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.

```json
{
  "skill_name": "example-skill",
  "evals": [
    {
      "id": 1,
      "prompt": "User's task prompt",
      "expected_output": "Description of expected result",
      "files": []
    }
  ]
}
```

See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).

## Running and evaluating test cases

This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.

Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.

### Step 1: Spawn all runs (with-skill AND baseline) in the same turn

For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.

**With-skill run:**

```
Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">
```

**Baseline run** (same prompt, but the baseline depends on context):
- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.

Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.

```json
{
  "eval_id": 0,
  "eval_name": "descriptive-name-here",
  "prompt": "The user's task prompt",
  "assertions": []
}
```

### Step 2: While runs are in progress, draft assertions

Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.

Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.

Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.

### Step 3: As runs complete, capture timing data

When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:

```json
{
  "total_tokens": 84852,
  "duration_ms": 23332,
  "total_duration_seconds": 23.3
}
```

This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
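
The capture step can be sketched in Python. This is a minimal sketch, assuming the notification arrives as a dict with the fields shown above; the helper name is illustrative:

```python
import json
from pathlib import Path


def save_timing(notification: dict, run_dir: str) -> dict:
    """Persist token/duration data from a task notification before it's lost."""
    timing = {
        "total_tokens": notification["total_tokens"],
        "duration_ms": notification["duration_ms"],
        # Derived convenience field, rounded to one decimal place
        "total_duration_seconds": round(notification["duration_ms"] / 1000, 1),
    }
    path = Path(run_dir) / "timing.json"
    path.parent.mkdir(parents=True, exist_ok=True)  # create run dirs lazily
    path.write_text(json.dumps(timing, indent=2))
    return timing
```

Calling it once per notification, as each arrives, matches the "don't batch" advice above.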

### Step 4: Grade, aggregate, and launch the viewer

Once all runs are done:

1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
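
A minimal sketch of the programmatic route (helper names and the CSV check are illustrative; the `text`/`passed`/`evidence` field names are the ones the viewer requires):

```python
import json
from pathlib import Path


def write_grading(run_dir: str, results: list) -> None:
    """Write grading.json with the exact field names the viewer expects."""
    grading = {
        "expectations": [
            # The viewer requires `text`, `passed`, and `evidence` exactly.
            {"text": text, "passed": passed, "evidence": evidence}
            for text, passed, evidence in results
        ]
    }
    Path(run_dir, "grading.json").write_text(json.dumps(grading, indent=2))


def check_csv_has_header(path: str, expected_header: str):
    """Example programmatic assertion: output CSV exists and has the right header."""
    p = Path(path)
    if not p.exists():
        return False, f"{path} was not created"
    text = p.read_text()
    first_line = text.splitlines()[0] if text else ""
    return first_line == expected_header, f"first line was {first_line!r}"
```

A check script like this can be rerun unchanged on every iteration.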

2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:
   ```bash
   python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
   ```
   This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.
   Put each with_skill version before its baseline counterpart.
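
If you do end up generating benchmark numbers manually, the mean ± stddev and delta computations can be sketched like this (helper names are illustrative):

```python
from statistics import mean, stdev


def summarize(values: list) -> dict:
    """Mean ± sample stddev for a list of per-run measurements (pass rate, seconds, tokens)."""
    return {
        "mean": mean(values),
        # stdev needs at least two data points; report 0.0 for a single run
        "stddev": stdev(values) if len(values) > 1 else 0.0,
    }


def delta(with_skill: list, baseline: list) -> float:
    """Improvement of the with-skill configuration over its baseline."""
    return mean(with_skill) - mean(baseline)
```

Sample (not population) stddev is the right choice here since each run is one draw from a noisy process.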

3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.

4. **Launch the viewer** with both qualitative outputs and quantitative data:
   ```bash
   nohup python <skill-creator-path>/eval-viewer/generate_review.py \
     <workspace>/iteration-N \
     --skill-name "my-skill" \
     --benchmark <workspace>/iteration-N/benchmark.json \
     > /dev/null 2>&1 &
   VIEWER_PID=$!
   ```
   For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.

   **Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.

   Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.

5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."

### What the user sees in the viewer

The "Outputs" tab shows one test case at a time:
- **Prompt**: the task that was given
- **Output**: the files the skill produced, rendered inline where possible
- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
- **Feedback**: a textbox that auto-saves as they type
- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox

The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.

Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.

### Step 5: Read the feedback

When the user tells you they're done, read `feedback.json`:

```json
{
  "reviews": [
    {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
    {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
    {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
  ],
  "status": "complete"
}
```

Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
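
Filtering down to the runs that need attention can be sketched like this (the helper name is illustrative; the file shape matches the example above):

```python
import json


def complaints(feedback_path: str) -> list:
    """Return only the reviews where the user left a specific complaint."""
    with open(feedback_path) as f:
        data = json.load(f)
    # Empty feedback means the user thought that run was fine
    return [r for r in data["reviews"] if r["feedback"].strip()]
```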

Kill the viewer server when you're done with it:

```bash
kill $VIEWER_PID 2>/dev/null
```

---

## Improving the skill

This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.

### How to think about improvements

1. **Generalize from the feedback.** The big picture is that we're trying to create skills that can be used a million times (maybe literally, maybe more, who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over because it helps you move faster. The user knows these examples inside and out, so it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. If some issue proves stubborn, rather than putting in fiddly, overfitted changes or oppressively constrictive MUSTs, try branching out: use different metaphors, or recommend different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.

2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.

3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task, what the user actually wrote, and why they wrote it, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.

4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.

This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.

### The iteration loop

After improving the skill:

1. Apply your improvements to the skill
2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
4. Wait for the user to review and tell you they're done
5. Read the new feedback, improve again, repeat

Keep going until:
- The user says they're happy
- The feedback is all empty (everything looks good)
- You're not making meaningful progress

---

## Advanced: Blind comparison

For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.

This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.

---

## Description Optimization

The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.

### Step 1: Generate trigger eval queries

Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:

```json
[
  {"query": "the user prompt", "should_trigger": true},
  {"query": "another prompt", "should_trigger": false}
]
```

The queries must be realistic — something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).

Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`

Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`

For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.

For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.

The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
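
A quick sanity check on the drafted eval set can catch obvious problems before the user reviews it. This is a minimal sketch; the helper name and the word-count threshold are illustrative, not part of the skill-creator tooling:

```python
import json


def check_eval_set(path: str) -> dict:
    """Sanity-check a trigger eval set: roughly balanced, no too-abstract queries."""
    with open(path) as f:
        evals = json.load(f)
    positives = [e for e in evals if e["should_trigger"]]
    negatives = [e for e in evals if not e["should_trigger"]]
    return {
        "n_positive": len(positives),
        "n_negative": len(negatives),
        # Very short queries are usually too abstract to be good test cases
        "suspiciously_short": [e["query"] for e in evals if len(e["query"].split()) < 6],
    }
```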

### Step 2: Review with user

Present the eval set to the user for review using the HTML template:

1. Read the template from `assets/eval_review.html`
2. Replace the placeholders:
   - `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
   - `__SKILL_NAME_PLACEHOLDER__` → the skill's name
   - `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
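
Picking up the most recent export can be sketched like this (the helper name is illustrative):

```python
from pathlib import Path


def latest_eval_set(downloads_dir):
    """Most recently modified eval_set*.json; handles browser renames like 'eval_set (1).json'."""
    candidates = list(Path(downloads_dir).glob("eval_set*.json"))
    if not candidates:
        return None
    # Newest by modification time wins
    return max(candidates, key=lambda p: p.stat().st_mtime)
```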

This step matters — bad eval queries lead to bad descriptions.

### Step 3: Run the optimization loop

Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."

Save the eval set to the workspace, then run in the background:

```bash
python -m scripts.run_loop \
  --eval-set <path-to-trigger-eval.json> \
  --skill-path <path-to-skill> \
  --model <model-id-powering-this-session> \
  --max-iterations 5 \
  --verbose
```

Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.

While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.

This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
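
The split-and-select logic can be sketched like this. The script does all of this for you; the sketch just illustrates the idea, and the helper names and dict fields are illustrative:

```python
import random


def split_eval_set(evals: list, seed: int = 0):
    """60% train / 40% held-out test, shuffled deterministically."""
    shuffled = evals[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * 0.6)
    return shuffled[:cut], shuffled[cut:]


def pick_best(iterations: list) -> dict:
    """Select by held-out test score, not train score, to avoid overfitting."""
    return max(iterations, key=lambda it: it["test_score"])
```

Selecting by test score matters because the proposer only ever sees train failures, so train scores drift upward even when the description isn't genuinely improving.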

### How skill triggering works

Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.

This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.

### Step 4: Apply the result

Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
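
A minimal sketch of the frontmatter update, assuming a single-line `description:` field (the helper name is illustrative; multi-line YAML descriptions would need a real YAML parser):

```python
import re
from pathlib import Path


def update_description(skill_md: str, best_description: str) -> None:
    """Replace the first `description:` line in SKILL.md frontmatter."""
    text = Path(skill_md).read_text()
    new_text, n = re.subn(
        r"^description:.*$",
        lambda m: "description: " + best_description,  # lambda avoids backref surprises
        text,
        count=1,
        flags=re.MULTILINE,
    )
    if n != 1:
        raise ValueError("no description: line found in frontmatter")
    Path(skill_md).write_text(new_text)
```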
|
||||
|
||||
---
|
||||
|
||||
### Package and Present (only if `present_files` tool is available)
|
||||
|
||||
Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
|
||||
|
||||
```bash
|
||||
python -m scripts.package_skill <path/to/skill-folder>
|
||||
```
|
||||
|
||||
After packaging, direct the user to the resulting `.skill` file path so they can install it.

---

## Claude.ai-specific instructions

In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:

**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.

**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"

**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.

**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.

**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`) which is only available in Claude Code. Skip it if you're on Claude.ai.

**Blind comparison**: Requires subagents. Skip it.

**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.

**Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. In this case:

- **Preserve the original name.** Note the skill's directory name and `name` frontmatter field -- use them unchanged. E.g., if the installed skill is `research-helper`, output `research-helper.skill` (not `research-helper-v2`).
- **Copy to a writeable location before editing.** The installed skill path may be read-only. Copy to `/tmp/skill-name/`, edit there, and package from the copy.
- **If packaging manually, stage in `/tmp/` first**, then copy to the output directory -- direct writes may fail due to permissions.

---

## Cowork-Specific Instructions

If you're in Cowork, the main things to know are:

- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than parallel.)

- You don't have a browser or display, so when generating the eval viewer, use `--static <output_path>` to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
- For whatever reason, the Cowork setup seems to discourage Claude from generating the eval viewer after running the tests, so to reiterate: whether you're in Cowork or Claude Code, always generate the eval viewer with `generate_review.py` (not your own bespoke HTML) after running tests, so the human can look at examples before you revise the skill and attempt corrections yourself. Sorry in advance, but I'm going all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating outputs yourself. You want to get the results in front of the human ASAP!

- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).

- Packaging works — `package_skill.py` just needs Python and a filesystem.

- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine since it uses `claude -p` via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.

- **Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. Follow the update guidance in the claude.ai section above.
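
The feedback step above (reading reviewer verdicts out of the downloaded `feedback.json`) can be sketched as follows. The schema shown is hypothetical, so inspect the real file before relying on it:

```python
import json

# Hypothetical schema: the real feedback.json may differ, so inspect it first.
raw = """
[
  {"test_case": "report-basic", "verdict": "approve", "comment": "Looks good"},
  {"test_case": "report-tables", "verdict": "revise", "comment": "Table headers are missing"}
]
"""

feedback = json.loads(raw)
# Collect the cases the human wants changed, to drive the next iteration.
needs_work = [item for item in feedback if item["verdict"] != "approve"]
for item in needs_work:
    print(f"{item['test_case']}: {item['comment']}")
```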

---

## Reference files

The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.

- `agents/grader.md` — How to evaluate assertions against outputs
- `agents/comparator.md` — How to do blind A/B comparison between two outputs
- `agents/analyzer.md` — How to analyze why one version beat another

The references/ directory has additional documentation:

- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.

---

Repeating one more time the core loop here for emphasis:

- Figure out what the skill is about
- Draft or edit the skill
- Run claude-with-access-to-the-skill on test prompts
- With the user, evaluate the outputs:
  - Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
  - Run quantitative evals
- Repeat until you and the user are satisfied
- Package the final skill and return it to the user.

Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.

Good luck!

---

# Post-hoc Analyzer Agent

Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.

## Role

After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?

## Inputs

You receive these parameters in your prompt:

- **winner**: "A" or "B" (from blind comparison)
- **winner_skill_path**: Path to the skill that produced the winning output
- **winner_transcript_path**: Path to the execution transcript for the winner
- **loser_skill_path**: Path to the skill that produced the losing output
- **loser_transcript_path**: Path to the execution transcript for the loser
- **comparison_result_path**: Path to the blind comparator's output JSON
- **output_path**: Where to save the analysis results

## Process

### Step 1: Read Comparison Result

1. Read the blind comparator's output at comparison_result_path
2. Note the winning side (A or B), the reasoning, and any scores
3. Understand what the comparator valued in the winning output

### Step 2: Read Both Skills

1. Read the winner skill's SKILL.md and key referenced files
2. Read the loser skill's SKILL.md and key referenced files
3. Identify structural differences:
   - Instruction clarity and specificity
   - Script/tool usage patterns
   - Example coverage
   - Edge case handling

### Step 3: Read Both Transcripts

1. Read the winner's transcript
2. Read the loser's transcript
3. Compare execution patterns:
   - How closely did each follow their skill's instructions?
   - What tools were used differently?
   - Where did the loser diverge from optimal behavior?
   - Did either encounter errors or make recovery attempts?

### Step 4: Analyze Instruction Following

For each transcript, evaluate:

- Did the agent follow the skill's explicit instructions?
- Did the agent use the skill's provided tools/scripts?
- Were there missed opportunities to leverage skill content?
- Did the agent add unnecessary steps not in the skill?

Score instruction following 1-10 and note specific issues.

### Step 5: Identify Winner Strengths

Determine what made the winner better:

- Clearer instructions that led to better behavior?
- Better scripts/tools that produced better output?
- More comprehensive examples that guided edge cases?
- Better error handling guidance?

Be specific. Quote from skills/transcripts where relevant.

### Step 6: Identify Loser Weaknesses

Determine what held the loser back:

- Ambiguous instructions that led to suboptimal choices?
- Missing tools/scripts that forced workarounds?
- Gaps in edge case coverage?
- Poor error handling that caused failures?

### Step 7: Generate Improvement Suggestions

Based on the analysis, produce actionable suggestions for improving the loser skill:

- Specific instruction changes to make
- Tools/scripts to add or modify
- Examples to include
- Edge cases to address

Prioritize by impact. Focus on changes that would have changed the outcome.

### Step 8: Write Analysis Results

Save structured analysis to `{output_path}`.

## Output Format

Write a JSON file with this structure:

```json
{
  "comparison_summary": {
    "winner": "A",
    "winner_skill": "path/to/winner/skill",
    "loser_skill": "path/to/loser/skill",
    "comparator_reasoning": "Brief summary of why comparator chose winner"
  },
  "winner_strengths": [
    "Clear step-by-step instructions for handling multi-page documents",
    "Included validation script that caught formatting errors",
    "Explicit guidance on fallback behavior when OCR fails"
  ],
  "loser_weaknesses": [
    "Vague instruction 'process the document appropriately' led to inconsistent behavior",
    "No script for validation, agent had to improvise and made errors",
    "No guidance on OCR failure, agent gave up instead of trying alternatives"
  ],
  "instruction_following": {
    "winner": {
      "score": 9,
      "issues": [
        "Minor: skipped optional logging step"
      ]
    },
    "loser": {
      "score": 6,
      "issues": [
        "Did not use the skill's formatting template",
        "Invented own approach instead of following step 3",
        "Missed the 'always validate output' instruction"
      ]
    }
  },
  "improvement_suggestions": [
    {
      "priority": "high",
      "category": "instructions",
      "suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
      "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
    },
    {
      "priority": "high",
      "category": "tools",
      "suggestion": "Add validate_output.py script similar to winner skill's validation approach",
      "expected_impact": "Would catch formatting errors before final output"
    },
    {
      "priority": "medium",
      "category": "error_handling",
      "suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'",
      "expected_impact": "Would prevent early failure on difficult documents"
    }
  ],
  "transcript_insights": {
    "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
    "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
  }
}
```

## Guidelines

- **Be specific**: Quote from skills and transcripts, don't just say "instructions were unclear"
- **Be actionable**: Suggestions should be concrete changes, not vague advice
- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
- **Prioritize by impact**: Which changes would most likely have changed the outcome?
- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
- **Stay objective**: Analyze what happened, don't editorialize
- **Think about generalization**: Would this improvement help on other evals too?

## Categories for Suggestions

Use these categories to organize improvement suggestions:

| Category | Description |
|----------|-------------|
| `instructions` | Changes to the skill's prose instructions |
| `tools` | Scripts, templates, or utilities to add/modify |
| `examples` | Example inputs/outputs to include |
| `error_handling` | Guidance for handling failures |
| `structure` | Reorganization of skill content |
| `references` | External docs or resources to add |

## Priority Levels

- **high**: Would likely change the outcome of this comparison
- **medium**: Would improve quality but may not change win/loss
- **low**: Nice to have, marginal improvement

---

# Analyzing Benchmark Results

When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not suggest skill improvements.

## Role

Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.

## Inputs

You receive these parameters in your prompt:

- **benchmark_data_path**: Path to the in-progress benchmark.json with all run results
- **skill_path**: Path to the skill being benchmarked
- **output_path**: Where to save the notes (as JSON array of strings)

## Process

### Step 1: Read Benchmark Data

1. Read the benchmark.json containing all run results
2. Note the configurations tested (with_skill, without_skill)
3. Understand the run_summary aggregates already calculated

### Step 2: Analyze Per-Assertion Patterns

For each expectation across all runs:

- Does it **always pass** in both configurations? (may not differentiate skill value)
- Does it **always fail** in both configurations? (may be broken or beyond capability)
- Does it **always pass with skill but fail without**? (skill clearly adds value here)
- Does it **always fail with skill but pass without**? (skill may be hurting)
- Is it **highly variable**? (flaky expectation or non-deterministic behavior)
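
The bucketing above can be sketched as a small classifier. The boolean-per-run shape here is illustrative, not the actual benchmark.json schema:

```python
def classify(with_skill: list[bool], without_skill: list[bool]) -> str:
    """Bucket one assertion by its pass pattern across repeated runs."""
    if all(with_skill) and all(without_skill):
        return "always passes"          # may not differentiate skill value
    if not any(with_skill) and not any(without_skill):
        return "always fails"           # may be broken or beyond capability
    if all(with_skill) and not any(without_skill):
        return "skill adds value"
    if not any(with_skill) and all(without_skill):
        return "skill may be hurting"
    return "highly variable"            # flaky or non-deterministic

print(classify([True, True, True], [False, False, False]))  # skill adds value
print(classify([True, False, True], [False, True, False]))  # highly variable
```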

### Step 3: Analyze Cross-Eval Patterns

Look for patterns across evals:

- Are certain eval types consistently harder/easier?
- Do some evals show high variance while others are stable?
- Are there surprising results that contradict expectations?

### Step 4: Analyze Metrics Patterns

Look at time_seconds, tokens, tool_calls:

- Does the skill significantly increase execution time?
- Is there high variance in resource usage?
- Are there outlier runs that skew the aggregates?
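
One quick way to flag outlier runs, using illustrative numbers rather than real benchmark data:

```python
from statistics import mean, stdev

# Illustrative per-run durations in seconds; real values come from benchmark.json.
times = [41.0, 39.5, 44.2, 40.8, 112.3]
mu, sigma = mean(times), stdev(times)
# With only a handful of runs, a loose 1.5-sigma threshold is enough to flag the oddball.
outliers = [t for t in times if abs(t - mu) > 1.5 * sigma]
print(f"mean={mu:.1f}s stdev={sigma:.1f}s outliers={outliers}")
```

Flagged runs are worth a note like "run 5 took 112s vs a ~41s median, skewing the average upward."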

### Step 5: Generate Notes

Write freeform observations as a list of strings. Each note should:

- State a specific observation
- Be grounded in the data (not speculation)
- Help the user understand something the aggregate metrics don't show

Examples:

- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
- "Without-skill runs consistently fail on table extraction expectations (0% pass rate)"
- "Skill adds 13s average execution time but improves pass rate by 50%"
- "Token usage is 80% higher with skill, primarily due to script output parsing"
- "All 3 without-skill runs for eval 1 produced empty output"

### Step 6: Write Notes

Save notes to `{output_path}` as a JSON array of strings:

```json
[
  "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
  "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure",
  "Without-skill runs consistently fail on table extraction expectations",
  "Skill adds 13s average execution time but improves pass rate by 50%"
]
```

## Guidelines

**DO:**

- Report what you observe in the data
- Be specific about which evals, expectations, or runs you're referring to
- Note patterns that aggregate metrics would hide
- Provide context that helps interpret the numbers

**DO NOT:**

- Suggest improvements to the skill (that's for the improvement step, not benchmarking)
- Make subjective quality judgments ("the output was good/bad")
- Speculate about causes without evidence
- Repeat information already in the run_summary aggregates

---

# Blind Comparator Agent

Compare two outputs WITHOUT knowing which skill produced them.

## Role

The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach.

Your judgment is based purely on output quality and task completion.

## Inputs

You receive these parameters in your prompt:

- **output_a_path**: Path to the first output file or directory
- **output_b_path**: Path to the second output file or directory
- **eval_prompt**: The original task/prompt that was executed
- **expectations**: List of expectations to check (optional - may be empty)

## Process
|
||||
|
||||
### Step 1: Read Both Outputs
|
||||
|
||||
1. Examine output A (file or directory)
|
||||
2. Examine output B (file or directory)
|
||||
3. Note the type, structure, and content of each
|
||||
4. If outputs are directories, examine all relevant files inside
|
||||
|
||||
### Step 2: Understand the Task
|
||||
|
||||
1. Read the eval_prompt carefully
|
||||
2. Identify what the task requires:
|
||||
- What should be produced?
|
||||
- What qualities matter (accuracy, completeness, format)?
|
||||
- What would distinguish a good output from a poor one?
|
||||
|
||||
### Step 3: Generate Evaluation Rubric
|
||||
|
||||
Based on the task, generate a rubric with two dimensions:
|
||||
|
||||
**Content Rubric** (what the output contains):
|
||||
| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|
||||
|-----------|----------|----------------|---------------|
|
||||
| Correctness | Major errors | Minor errors | Fully correct |
|
||||
| Completeness | Missing key elements | Mostly complete | All elements present |
|
||||
| Accuracy | Significant inaccuracies | Minor inaccuracies | Accurate throughout |
|
||||
|
||||
**Structure Rubric** (how the output is organized):
|
||||
| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
|
||||
|-----------|----------|----------------|---------------|
|
||||
| Organization | Disorganized | Reasonably organized | Clear, logical structure |
|
||||
| Formatting | Inconsistent/broken | Mostly consistent | Professional, polished |
|
||||
| Usability | Difficult to use | Usable with effort | Easy to use |
|
||||
|
||||
Adapt criteria to the specific task. For example:
|
||||
- PDF form → "Field alignment", "Text readability", "Data placement"
|
||||
- Document → "Section structure", "Heading hierarchy", "Paragraph flow"
|
||||
- Data output → "Schema correctness", "Data types", "Completeness"

### Step 4: Evaluate Each Output Against the Rubric

For each output (A and B):

1. **Score each criterion** on the rubric (1-5 scale)
2. **Calculate dimension totals**: Content score, Structure score
3. **Calculate overall score**: Average of dimension scores, scaled to 1-10
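
The score arithmetic in steps 2 and 3 can be sketched as follows; the doubling in the final step matches the example scores used later in this document (dimension averages of 4.7 and 4.3 yield an overall 9.0):

```python
def dimension_score(criteria: dict) -> float:
    """Average the 1-5 criterion scores for one dimension."""
    return round(sum(criteria.values()) / len(criteria), 1)

def overall_score(content: dict, structure: dict) -> float:
    """Average the two dimension scores, then double to scale 1-5 up to 1-10."""
    c, s = dimension_score(content), dimension_score(structure)
    return round((c + s) / 2 * 2, 1)

content = {"correctness": 5, "completeness": 5, "accuracy": 4}
structure = {"organization": 4, "formatting": 5, "usability": 4}
print(dimension_score(content), dimension_score(structure), overall_score(content, structure))
# 4.7 4.3 9.0
```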

### Step 5: Check Expectations (if provided)

If expectations are provided:

1. Check each expectation against output A
2. Check each expectation against output B
3. Count pass rates for each output
4. Use expectation scores as secondary evidence (not the primary decision factor)

### Step 6: Determine the Winner

Compare A and B based on (in priority order):

1. **Primary**: Overall rubric score (content + structure)
2. **Secondary**: Expectation pass rates (if applicable)
3. **Tiebreaker**: If truly equal, declare a TIE

Be decisive - ties should be rare. One output is usually better, even if marginally.

### Step 7: Write Comparison Results

Save results to a JSON file at the path specified (or `comparison.json` if not specified).

## Output Format

Write a JSON file with this structure:

```json
{
  "winner": "A",
  "reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
  "rubric": {
    "A": {
      "content": {
        "correctness": 5,
        "completeness": 5,
        "accuracy": 4
      },
      "structure": {
        "organization": 4,
        "formatting": 5,
        "usability": 4
      },
      "content_score": 4.7,
      "structure_score": 4.3,
      "overall_score": 9.0
    },
    "B": {
      "content": {
        "correctness": 3,
        "completeness": 2,
        "accuracy": 3
      },
      "structure": {
        "organization": 3,
        "formatting": 2,
        "usability": 3
      },
      "content_score": 2.7,
      "structure_score": 2.7,
      "overall_score": 5.4
    }
  },
  "output_quality": {
    "A": {
      "score": 9,
      "strengths": ["Complete solution", "Well-formatted", "All fields present"],
      "weaknesses": ["Minor style inconsistency in header"]
    },
    "B": {
      "score": 5,
      "strengths": ["Readable output", "Correct basic structure"],
      "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
    }
  },
  "expectation_results": {
    "A": {
      "passed": 4,
      "total": 5,
      "pass_rate": 0.80,
      "details": [
        {"text": "Output includes name", "passed": true},
        {"text": "Output includes date", "passed": true},
        {"text": "Format is PDF", "passed": true},
        {"text": "Contains signature", "passed": false},
        {"text": "Readable text", "passed": true}
      ]
    },
    "B": {
      "passed": 3,
      "total": 5,
      "pass_rate": 0.60,
      "details": [
        {"text": "Output includes name", "passed": true},
        {"text": "Output includes date", "passed": false},
        {"text": "Format is PDF", "passed": true},
        {"text": "Contains signature", "passed": false},
        {"text": "Readable text", "passed": true}
      ]
    }
  }
}
```

If no expectations were provided, omit the `expectation_results` field entirely.

## Field Descriptions

- **winner**: "A", "B", or "TIE"
- **reasoning**: Clear explanation of why the winner was chosen (or why it's a tie)
- **rubric**: Structured rubric evaluation for each output
  - **content**: Scores for content criteria (correctness, completeness, accuracy)
  - **structure**: Scores for structure criteria (organization, formatting, usability)
  - **content_score**: Average of content criteria (1-5)
  - **structure_score**: Average of structure criteria (1-5)
  - **overall_score**: Combined score scaled to 1-10
- **output_quality**: Summary quality assessment
  - **score**: 1-10 rating (should match rubric overall_score)
  - **strengths**: List of positive aspects
  - **weaknesses**: List of issues or shortcomings
- **expectation_results**: (Only if expectations provided)
  - **passed**: Number of expectations that passed
  - **total**: Total number of expectations
  - **pass_rate**: Fraction passed (0.0 to 1.0)
  - **details**: Individual expectation results

## Guidelines

- **Stay blind**: DO NOT try to infer which skill produced which output. Judge purely on output quality.
- **Be specific**: Cite specific examples when explaining strengths and weaknesses.
- **Be decisive**: Choose a winner unless outputs are genuinely equivalent.
- **Output quality first**: Expectation scores are secondary to overall task completion.
- **Be objective**: Don't favor outputs based on style preferences; focus on correctness and completeness.
- **Explain your reasoning**: The reasoning field should make it clear why you chose the winner.
- **Handle edge cases**: If both outputs fail, pick the one that fails less badly. If both are excellent, pick the one that's marginally better.

---

# Grader Agent

Evaluate expectations against an execution transcript and outputs.

## Role

The Grader reviews a transcript and output files, then determines whether each expectation passes or fails. Provide clear evidence for each judgment.

You have two jobs: grade the outputs, and critique the evals themselves. A passing grade on a weak assertion is worse than useless — it creates false confidence. When you notice an assertion that's trivially satisfied, or an important outcome that no assertion checks, say so.

## Inputs

You receive these parameters in your prompt:

- **expectations**: List of expectations to evaluate (strings)
- **transcript_path**: Path to the execution transcript (markdown file)
- **outputs_dir**: Directory containing output files from execution

## Process

### Step 1: Read the Transcript

1. Read the transcript file completely
2. Note the eval prompt, execution steps, and final result
3. Identify any issues or errors documented

### Step 2: Examine Output Files

1. List files in outputs_dir
2. Read/examine each file relevant to the expectations. If outputs aren't plain text, use the inspection tools provided in your prompt — don't rely solely on what the transcript says the executor produced.
3. Note contents, structure, and quality

### Step 3: Evaluate Each Assertion

For each expectation:

1. **Search for evidence** in the transcript and outputs
2. **Determine verdict**:
   - **PASS**: Clear evidence the expectation is true AND the evidence reflects genuine task completion, not just surface-level compliance
   - **FAIL**: No evidence, or evidence contradicts the expectation, or the evidence is superficial (e.g., correct filename but empty/wrong content)
3. **Cite the evidence**: Quote the specific text or describe what you found

### Step 4: Extract and Verify Claims

Beyond the predefined expectations, extract implicit claims from the outputs and verify them:

1. **Extract claims** from the transcript and outputs:
   - Factual statements ("The form has 12 fields")
   - Process claims ("Used pypdf to fill the form")
   - Quality claims ("All fields were filled correctly")

2. **Verify each claim**:
   - **Factual claims**: Can be checked against the outputs or external sources
   - **Process claims**: Can be verified from the transcript
   - **Quality claims**: Evaluate whether the claim is justified

3. **Flag unverifiable claims**: Note claims that cannot be verified with available information

This catches issues that predefined expectations might miss.

### Step 5: Read User Notes

If `{outputs_dir}/user_notes.md` exists:

1. Read it and note any uncertainties or issues flagged by the executor
2. Include relevant concerns in the grading output
3. These may reveal problems even when expectations pass

### Step 6: Critique the Evals

After grading, consider whether the evals themselves could be improved. Only surface suggestions when there's a clear gap.

Good suggestions test meaningful outcomes — assertions that are hard to satisfy without actually doing the work correctly. Think about what makes an assertion *discriminating*: it passes when the skill genuinely succeeds and fails when it doesn't.

Suggestions worth raising:

- An assertion that passed but would also pass for a clearly wrong output (e.g., checking filename existence but not file content)
- An important outcome you observed — good or bad — that no assertion covers at all
- An assertion that can't actually be verified from the available outputs

Keep the bar high. The goal is to flag things the eval author would say "good catch" about, not to nitpick every assertion.

### Step 7: Write Grading Results

Save results to `{outputs_dir}/../grading.json` (sibling to outputs_dir).
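
A small sketch of resolving that sibling path with `pathlib` (the directory layout is illustrative):

```python
from pathlib import Path

outputs_dir = Path("runs/eval-003/outputs")  # illustrative layout
grading_path = outputs_dir.parent / "grading.json"  # sibling of outputs_dir
print(grading_path.as_posix())
# runs/eval-003/grading.json
```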

## Grading Criteria

**PASS when**:

- The transcript or outputs clearly demonstrate the expectation is true
- Specific evidence can be cited
- The evidence reflects genuine substance, not just surface compliance (e.g., a file exists AND contains correct content, not just the right filename)

**FAIL when**:

- No evidence found for the expectation
- Evidence contradicts the expectation
- The expectation cannot be verified from available information
- The evidence is superficial — the assertion is technically satisfied but the underlying task outcome is wrong or incomplete
- The output appears to meet the assertion by coincidence rather than by actually doing the work

**When uncertain**: The burden of proof to pass is on the expectation.

### Step 8: Read Executor Metrics and Timing

1. If `{outputs_dir}/metrics.json` exists, read it and include in grading output
2. If `{outputs_dir}/../timing.json` exists, read it and include timing data

## Output Format

Write a JSON file with this structure:

```json
{
  "expectations": [
    {
      "text": "The output includes the name 'John Smith'",
      "passed": true,
      "evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
    },
    {
      "text": "The spreadsheet has a SUM formula in cell B10",
      "passed": false,
      "evidence": "No spreadsheet was created. The output was a text file."
    },
    {
      "text": "The assistant used the skill's OCR script",
      "passed": true,
      "evidence": "Transcript Step 2 shows: 'Tool: Bash - python ocr_script.py image.png'"
    }
  ],
  "summary": {
    "passed": 2,
    "failed": 1,
    "total": 3,
    "pass_rate": 0.67
  },
  "execution_metrics": {
    "tool_calls": {
      "Read": 5,
      "Write": 2,
      "Bash": 8
    },
    "total_tool_calls": 15,
    "total_steps": 6,
    "errors_encountered": 0,
    "output_chars": 12450,
    "transcript_chars": 3200
  },
  "timing": {
    "executor_duration_seconds": 165.0,
    "grader_duration_seconds": 26.0,
    "total_duration_seconds": 191.0
  },
  "claims": [
    {
      "claim": "The form has 12 fillable fields",
      "type": "factual",
      "verified": true,
      "evidence": "Counted 12 fields in field_info.json"
    },
    {
      "claim": "All required fields were populated",
      "type": "quality",
      "verified": false,
      "evidence": "Reference section was left blank despite data being available"
    }
  ],
  "user_notes_summary": {
    "uncertainties": ["Used 2023 data, may be stale"],
    "needs_review": [],
    "workarounds": ["Fell back to text overlay for non-fillable fields"]
  },
  "eval_feedback": {
    "suggestions": [
      {
        "assertion": "The output includes the name 'John Smith'",
        "reason": "A hallucinated document that mentions the name would also pass — consider checking it appears as the primary contact with matching phone and email from the input"
      },
      {
        "reason": "No assertion checks whether the extracted phone numbers match the input — I observed incorrect numbers in the output that went uncaught"
      }
    ],
    "overall": "Assertions check presence but not correctness. Consider adding content verification."
  }
}
```
|
||||
|
||||
## Field Descriptions

- **expectations**: Array of graded expectations
  - **text**: The original expectation text
  - **passed**: Boolean - true if expectation passes
  - **evidence**: Specific quote or description supporting the verdict
- **summary**: Aggregate statistics
  - **passed**: Count of passed expectations
  - **failed**: Count of failed expectations
  - **total**: Total expectations evaluated
  - **pass_rate**: Fraction passed (0.0 to 1.0)
- **execution_metrics**: Copied from executor's metrics.json (if available)
  - **output_chars**: Total character count of output files (proxy for tokens)
  - **transcript_chars**: Character count of transcript
- **timing**: Wall clock timing from timing.json (if available)
  - **executor_duration_seconds**: Time spent in executor subagent
  - **total_duration_seconds**: Total elapsed time for the run
- **claims**: Extracted and verified claims from the output
  - **claim**: The statement being verified
  - **type**: "factual", "process", or "quality"
  - **verified**: Boolean - whether the claim holds
  - **evidence**: Supporting or contradicting evidence
- **user_notes_summary**: Issues flagged by the executor
  - **uncertainties**: Things the executor wasn't sure about
  - **needs_review**: Items requiring human attention
  - **workarounds**: Places where the skill didn't work as expected
- **eval_feedback**: Improvement suggestions for the evals (only when warranted)
  - **suggestions**: List of concrete suggestions, each with a `reason` and optionally an `assertion` it relates to
  - **overall**: Brief assessment — can be "No suggestions, evals look solid" if nothing to flag

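The `summary` block is purely mechanical — it can be derived from the graded expectations. A sketch of that derivation (the function name is illustrative):

```python
def summarize(expectations):
    """Build the summary block from a list of graded expectation dicts."""
    passed = sum(1 for e in expectations if e["passed"])
    total = len(expectations)
    return {
        "passed": passed,
        "failed": total - passed,
        "total": total,
        # Round to two decimals, matching the 0.67 in the example above.
        "pass_rate": round(passed / total, 2) if total else 0.0,
    }
```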
## Guidelines

- **Be objective**: Base verdicts on evidence, not assumptions
- **Be specific**: Quote the exact text that supports your verdict
- **Be thorough**: Check both transcript and output files
- **Be consistent**: Apply the same standard to each expectation
- **Explain failures**: Make it clear why evidence was insufficient
- **No partial credit**: Each expectation is pass or fail, not partial

@@ -0,0 +1,146 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Eval Set Review - __SKILL_NAME_PLACEHOLDER__</title>
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
  <style>
    * { box-sizing: border-box; margin: 0; padding: 0; }
    body { font-family: 'Lora', Georgia, serif; background: #faf9f5; padding: 2rem; color: #141413; }
    h1 { font-family: 'Poppins', sans-serif; margin-bottom: 0.5rem; font-size: 1.5rem; }
    .description { color: #b0aea5; margin-bottom: 1.5rem; font-style: italic; max-width: 900px; }
    .controls { margin-bottom: 1rem; display: flex; gap: 0.5rem; }
    .btn { font-family: 'Poppins', sans-serif; padding: 0.5rem 1rem; border: none; border-radius: 6px; cursor: pointer; font-size: 0.875rem; font-weight: 500; }
    .btn-add { background: #6a9bcc; color: white; }
    .btn-add:hover { background: #5889b8; }
    .btn-export { background: #d97757; color: white; }
    .btn-export:hover { background: #c4613f; }
    table { width: 100%; max-width: 1100px; border-collapse: collapse; background: white; border-radius: 6px; overflow: hidden; box-shadow: 0 1px 3px rgba(0,0,0,0.08); }
    th { font-family: 'Poppins', sans-serif; background: #141413; color: #faf9f5; padding: 0.75rem 1rem; text-align: left; font-size: 0.875rem; }
    td { padding: 0.75rem 1rem; border-bottom: 1px solid #e8e6dc; vertical-align: top; }
    tr:nth-child(even) td { background: #faf9f5; }
    tr:hover td { background: #f3f1ea; }
    .section-header td { background: #e8e6dc; font-family: 'Poppins', sans-serif; font-weight: 500; font-size: 0.8rem; color: #141413; text-transform: uppercase; letter-spacing: 0.05em; }
    .query-input { width: 100%; padding: 0.4rem; border: 1px solid #e8e6dc; border-radius: 4px; font-size: 0.875rem; font-family: 'Lora', Georgia, serif; resize: vertical; min-height: 60px; }
    .query-input:focus { outline: none; border-color: #d97757; box-shadow: 0 0 0 2px rgba(217,119,87,0.15); }
    .toggle { position: relative; display: inline-block; width: 44px; height: 24px; }
    .toggle input { opacity: 0; width: 0; height: 0; }
    .toggle .slider { position: absolute; inset: 0; background: #b0aea5; border-radius: 24px; cursor: pointer; transition: 0.2s; }
    .toggle .slider::before { content: ""; position: absolute; width: 18px; height: 18px; left: 3px; bottom: 3px; background: white; border-radius: 50%; transition: 0.2s; }
    .toggle input:checked + .slider { background: #d97757; }
    .toggle input:checked + .slider::before { transform: translateX(20px); }
    .btn-delete { background: #c44; color: white; padding: 0.3rem 0.6rem; border: none; border-radius: 4px; cursor: pointer; font-size: 0.75rem; font-family: 'Poppins', sans-serif; }
    .btn-delete:hover { background: #a33; }
    .summary { margin-top: 1rem; color: #b0aea5; font-size: 0.875rem; }
  </style>
</head>
<body>
  <h1>Eval Set Review: <span id="skill-name">__SKILL_NAME_PLACEHOLDER__</span></h1>
  <p class="description">Current description: <span id="skill-desc">__SKILL_DESCRIPTION_PLACEHOLDER__</span></p>

  <div class="controls">
    <button class="btn btn-add" onclick="addRow()">+ Add Query</button>
    <button class="btn btn-export" onclick="exportEvalSet()">Export Eval Set</button>
  </div>

  <table>
    <thead>
      <tr>
        <th style="width:65%">Query</th>
        <th style="width:18%">Should Trigger</th>
        <th style="width:10%">Actions</th>
      </tr>
    </thead>
    <tbody id="eval-body"></tbody>
  </table>

  <p class="summary" id="summary"></p>

  <script>
    const EVAL_DATA = __EVAL_DATA_PLACEHOLDER__;

    let evalItems = [...EVAL_DATA];

    function render() {
      const tbody = document.getElementById('eval-body');
      tbody.innerHTML = '';

      // Sort: should-trigger first, then should-not-trigger
      const sorted = evalItems
        .map((item, origIdx) => ({ ...item, origIdx }))
        .sort((a, b) => (b.should_trigger ? 1 : 0) - (a.should_trigger ? 1 : 0));

      let lastGroup = null;
      sorted.forEach(item => {
        const group = item.should_trigger ? 'trigger' : 'no-trigger';
        if (group !== lastGroup) {
          const headerRow = document.createElement('tr');
          headerRow.className = 'section-header';
          headerRow.innerHTML = `<td colspan="3">${item.should_trigger ? 'Should Trigger' : 'Should NOT Trigger'}</td>`;
          tbody.appendChild(headerRow);
          lastGroup = group;
        }

        const idx = item.origIdx;
        const tr = document.createElement('tr');
        tr.innerHTML = `
          <td><textarea class="query-input" onchange="updateQuery(${idx}, this.value)">${escapeHtml(item.query)}</textarea></td>
          <td>
            <label class="toggle">
              <input type="checkbox" ${item.should_trigger ? 'checked' : ''} onchange="updateTrigger(${idx}, this.checked)">
              <span class="slider"></span>
            </label>
            <span style="margin-left:8px;font-size:0.8rem;color:#b0aea5">${item.should_trigger ? 'Yes' : 'No'}</span>
          </td>
          <td><button class="btn-delete" onclick="deleteRow(${idx})">Delete</button></td>
        `;
        tbody.appendChild(tr);
      });
      updateSummary();
    }

    function escapeHtml(text) {
      const div = document.createElement('div');
      div.textContent = text;
      return div.innerHTML;
    }

    function updateQuery(idx, value) { evalItems[idx].query = value; updateSummary(); }
    function updateTrigger(idx, value) { evalItems[idx].should_trigger = value; render(); }
    function deleteRow(idx) { evalItems.splice(idx, 1); render(); }

    function addRow() {
      evalItems.push({ query: '', should_trigger: true });
      render();
      const inputs = document.querySelectorAll('.query-input');
      inputs[inputs.length - 1].focus();
    }

    function updateSummary() {
      const trigger = evalItems.filter(i => i.should_trigger).length;
      const noTrigger = evalItems.filter(i => !i.should_trigger).length;
      document.getElementById('summary').textContent =
        `${evalItems.length} queries total: ${trigger} should trigger, ${noTrigger} should not trigger`;
    }

    function exportEvalSet() {
      const valid = evalItems.filter(i => i.query.trim() !== '');
      const data = valid.map(i => ({ query: i.query.trim(), should_trigger: i.should_trigger }));
      const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' });
      const url = URL.createObjectURL(blob);
      const a = document.createElement('a');
      a.href = url;
      a.download = 'eval_set.json';
      document.body.appendChild(a);
      a.click();
      document.body.removeChild(a);
      URL.revokeObjectURL(url);
    }

    render();
  </script>
</body>
</html>
@@ -0,0 +1,471 @@
#!/usr/bin/env python3
"""Generate and serve a review page for eval results.

Reads the workspace directory, discovers runs (directories with outputs/),
embeds all output data into a self-contained HTML page, and serves it via
a tiny HTTP server. Feedback auto-saves to feedback.json in the workspace.

Usage:
    python generate_review.py <workspace-path> [--port PORT] [--skill-name NAME]
    python generate_review.py <workspace-path> --previous-workspace /path/to/old/workspace

No dependencies beyond the Python stdlib are required.
"""

import argparse
import base64
import json
import mimetypes
import os
import re
import signal
import subprocess
import sys
import time
import webbrowser
from functools import partial
from http.server import HTTPServer, BaseHTTPRequestHandler
from pathlib import Path

# Files to exclude from output listings
METADATA_FILES = {"transcript.md", "user_notes.md", "metrics.json"}

# Extensions we render as inline text
TEXT_EXTENSIONS = {
    ".txt", ".md", ".json", ".csv", ".py", ".js", ".ts", ".tsx", ".jsx",
    ".yaml", ".yml", ".xml", ".html", ".css", ".sh", ".rb", ".go", ".rs",
    ".java", ".c", ".cpp", ".h", ".hpp", ".sql", ".r", ".toml",
}

# Extensions we render as inline images
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp"}

# MIME type overrides for common types
MIME_OVERRIDES = {
    ".svg": "image/svg+xml",
    ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
}

def get_mime_type(path: Path) -> str:
    ext = path.suffix.lower()
    if ext in MIME_OVERRIDES:
        return MIME_OVERRIDES[ext]
    mime, _ = mimetypes.guess_type(str(path))
    return mime or "application/octet-stream"


def find_runs(workspace: Path) -> list[dict]:
    """Recursively find directories that contain an outputs/ subdirectory."""
    runs: list[dict] = []
    _find_runs_recursive(workspace, workspace, runs)
    # eval_id may be None (build_run always sets the key); sort those runs
    # last rather than comparing None with int, which would raise TypeError.
    runs.sort(
        key=lambda r: (
            r["eval_id"] is None,
            r["eval_id"] if r["eval_id"] is not None else 0,
            r["id"],
        )
    )
    return runs


def _find_runs_recursive(root: Path, current: Path, runs: list[dict]) -> None:
    if not current.is_dir():
        return

    outputs_dir = current / "outputs"
    if outputs_dir.is_dir():
        run = build_run(root, current)
        if run:
            runs.append(run)
        return

    skip = {"node_modules", ".git", "__pycache__", "skill", "inputs"}
    for child in sorted(current.iterdir()):
        if child.is_dir() and child.name not in skip:
            _find_runs_recursive(root, child, runs)

def build_run(root: Path, run_dir: Path) -> dict | None:
    """Build a run dict with prompt, outputs, and grading data."""
    prompt = ""
    eval_id = None

    # Try eval_metadata.json
    for candidate in [run_dir / "eval_metadata.json", run_dir.parent / "eval_metadata.json"]:
        if candidate.exists():
            try:
                metadata = json.loads(candidate.read_text())
                prompt = metadata.get("prompt", "")
                eval_id = metadata.get("eval_id")
            except (json.JSONDecodeError, OSError):
                pass
        if prompt:
            break

    # Fall back to transcript.md
    if not prompt:
        for candidate in [run_dir / "transcript.md", run_dir / "outputs" / "transcript.md"]:
            if candidate.exists():
                try:
                    text = candidate.read_text()
                    match = re.search(r"## Eval Prompt\n\n([\s\S]*?)(?=\n##|$)", text)
                    if match:
                        prompt = match.group(1).strip()
                except OSError:
                    pass
            if prompt:
                break

    if not prompt:
        prompt = "(No prompt found)"

    run_id = str(run_dir.relative_to(root)).replace("/", "-").replace("\\", "-")

    # Collect output files
    outputs_dir = run_dir / "outputs"
    output_files: list[dict] = []
    if outputs_dir.is_dir():
        for f in sorted(outputs_dir.iterdir()):
            if f.is_file() and f.name not in METADATA_FILES:
                output_files.append(embed_file(f))

    # Load grading if present
    grading = None
    for candidate in [run_dir / "grading.json", run_dir.parent / "grading.json"]:
        if candidate.exists():
            try:
                grading = json.loads(candidate.read_text())
            except (json.JSONDecodeError, OSError):
                pass
        if grading:
            break

    return {
        "id": run_id,
        "prompt": prompt,
        "eval_id": eval_id,
        "outputs": output_files,
        "grading": grading,
    }

def embed_file(path: Path) -> dict:
    """Read a file and return an embedded representation."""
    ext = path.suffix.lower()
    mime = get_mime_type(path)

    if ext in TEXT_EXTENSIONS:
        try:
            content = path.read_text(errors="replace")
        except OSError:
            content = "(Error reading file)"
        return {
            "name": path.name,
            "type": "text",
            "content": content,
        }
    elif ext in IMAGE_EXTENSIONS:
        try:
            raw = path.read_bytes()
            b64 = base64.b64encode(raw).decode("ascii")
        except OSError:
            return {"name": path.name, "type": "error", "content": "(Error reading file)"}
        return {
            "name": path.name,
            "type": "image",
            "mime": mime,
            "data_uri": f"data:{mime};base64,{b64}",
        }
    elif ext == ".pdf":
        try:
            raw = path.read_bytes()
            b64 = base64.b64encode(raw).decode("ascii")
        except OSError:
            return {"name": path.name, "type": "error", "content": "(Error reading file)"}
        return {
            "name": path.name,
            "type": "pdf",
            "data_uri": f"data:{mime};base64,{b64}",
        }
    elif ext == ".xlsx":
        try:
            raw = path.read_bytes()
            b64 = base64.b64encode(raw).decode("ascii")
        except OSError:
            return {"name": path.name, "type": "error", "content": "(Error reading file)"}
        return {
            "name": path.name,
            "type": "xlsx",
            "data_b64": b64,
        }
    else:
        # Binary / unknown — base64 download link
        try:
            raw = path.read_bytes()
            b64 = base64.b64encode(raw).decode("ascii")
        except OSError:
            return {"name": path.name, "type": "error", "content": "(Error reading file)"}
        return {
            "name": path.name,
            "type": "binary",
            "mime": mime,
            "data_uri": f"data:{mime};base64,{b64}",
        }

def load_previous_iteration(workspace: Path) -> dict[str, dict]:
    """Load previous iteration's feedback and outputs.

    Returns a map of run_id -> {"feedback": str, "outputs": list[dict]}.
    """
    result: dict[str, dict] = {}

    # Load feedback
    feedback_map: dict[str, str] = {}
    feedback_path = workspace / "feedback.json"
    if feedback_path.exists():
        try:
            data = json.loads(feedback_path.read_text())
            feedback_map = {
                r["run_id"]: r["feedback"]
                for r in data.get("reviews", [])
                if r.get("feedback", "").strip()
            }
        except (json.JSONDecodeError, OSError, KeyError):
            pass

    # Load runs (to get outputs)
    prev_runs = find_runs(workspace)
    for run in prev_runs:
        result[run["id"]] = {
            "feedback": feedback_map.get(run["id"], ""),
            "outputs": run.get("outputs", []),
        }

    # Also add feedback for run_ids that had feedback but no matching run
    for run_id, fb in feedback_map.items():
        if run_id not in result:
            result[run_id] = {"feedback": fb, "outputs": []}

    return result

def generate_html(
    runs: list[dict],
    skill_name: str,
    previous: dict[str, dict] | None = None,
    benchmark: dict | None = None,
) -> str:
    """Generate the complete standalone HTML page with embedded data."""
    template_path = Path(__file__).parent / "viewer.html"
    template = template_path.read_text()

    # Build previous_feedback and previous_outputs maps for the template
    previous_feedback: dict[str, str] = {}
    previous_outputs: dict[str, list[dict]] = {}
    if previous:
        for run_id, data in previous.items():
            if data.get("feedback"):
                previous_feedback[run_id] = data["feedback"]
            if data.get("outputs"):
                previous_outputs[run_id] = data["outputs"]

    embedded = {
        "skill_name": skill_name,
        "runs": runs,
        "previous_feedback": previous_feedback,
        "previous_outputs": previous_outputs,
    }
    if benchmark:
        embedded["benchmark"] = benchmark

    data_json = json.dumps(embedded)

    return template.replace("/*__EMBEDDED_DATA__*/", f"const EMBEDDED_DATA = {data_json};")


# ---------------------------------------------------------------------------
# HTTP server (stdlib only, zero dependencies)
# ---------------------------------------------------------------------------

def _kill_port(port: int) -> None:
    """Kill any process listening on the given port."""
    try:
        result = subprocess.run(
            ["lsof", "-ti", f":{port}"],
            capture_output=True, text=True, timeout=5,
        )
        for pid_str in result.stdout.strip().split("\n"):
            if pid_str.strip():
                try:
                    os.kill(int(pid_str.strip()), signal.SIGTERM)
                except (ProcessLookupError, ValueError):
                    pass
        if result.stdout.strip():
            time.sleep(0.5)
    except subprocess.TimeoutExpired:
        pass
    except FileNotFoundError:
        print("Note: lsof not found, cannot check if port is in use", file=sys.stderr)

class ReviewHandler(BaseHTTPRequestHandler):
    """Serves the review HTML and handles feedback saves.

    Regenerates the HTML on each page load so that refreshing the browser
    picks up new eval outputs without restarting the server.
    """

    def __init__(
        self,
        workspace: Path,
        skill_name: str,
        feedback_path: Path,
        previous: dict[str, dict],
        benchmark_path: Path | None,
        *args,
        **kwargs,
    ):
        self.workspace = workspace
        self.skill_name = skill_name
        self.feedback_path = feedback_path
        self.previous = previous
        self.benchmark_path = benchmark_path
        super().__init__(*args, **kwargs)

    def do_GET(self) -> None:
        if self.path == "/" or self.path == "/index.html":
            # Regenerate HTML on each request (re-scans workspace for new outputs)
            runs = find_runs(self.workspace)
            benchmark = None
            if self.benchmark_path and self.benchmark_path.exists():
                try:
                    benchmark = json.loads(self.benchmark_path.read_text())
                except (json.JSONDecodeError, OSError):
                    pass
            html = generate_html(runs, self.skill_name, self.previous, benchmark)
            content = html.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(content)))
            self.end_headers()
            self.wfile.write(content)
        elif self.path == "/api/feedback":
            data = b"{}"
            if self.feedback_path.exists():
                data = self.feedback_path.read_bytes()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)
        else:
            self.send_error(404)

    def do_POST(self) -> None:
        if self.path == "/api/feedback":
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            try:
                data = json.loads(body)
                if not isinstance(data, dict) or "reviews" not in data:
                    raise ValueError("Expected JSON object with 'reviews' key")
                self.feedback_path.write_text(json.dumps(data, indent=2) + "\n")
                resp = b'{"ok":true}'
                self.send_response(200)
            except (json.JSONDecodeError, OSError, ValueError) as e:
                resp = json.dumps({"error": str(e)}).encode()
                self.send_response(500)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(resp)))
            self.end_headers()
            self.wfile.write(resp)
        else:
            self.send_error(404)

    def log_message(self, format: str, *args: object) -> None:
        # Suppress request logging to keep terminal clean
        pass

def main() -> None:
    parser = argparse.ArgumentParser(description="Generate and serve eval review")
    parser.add_argument("workspace", type=Path, help="Path to workspace directory")
    parser.add_argument("--port", "-p", type=int, default=3117, help="Server port (default: 3117)")
    parser.add_argument("--skill-name", "-n", type=str, default=None, help="Skill name for header")
    parser.add_argument(
        "--previous-workspace", type=Path, default=None,
        help="Path to previous iteration's workspace (shows old outputs and feedback as context)",
    )
    parser.add_argument(
        "--benchmark", type=Path, default=None,
        help="Path to benchmark.json to show in the Benchmark tab",
    )
    parser.add_argument(
        "--static", "-s", type=Path, default=None,
        help="Write standalone HTML to this path instead of starting a server",
    )
    args = parser.parse_args()

    workspace = args.workspace.resolve()
    if not workspace.is_dir():
        print(f"Error: {workspace} is not a directory", file=sys.stderr)
        sys.exit(1)

    runs = find_runs(workspace)
    if not runs:
        print(f"No runs found in {workspace}", file=sys.stderr)
        sys.exit(1)

    skill_name = args.skill_name or workspace.name.replace("-workspace", "")
    feedback_path = workspace / "feedback.json"

    previous: dict[str, dict] = {}
    if args.previous_workspace:
        previous = load_previous_iteration(args.previous_workspace.resolve())

    benchmark_path = args.benchmark.resolve() if args.benchmark else None
    benchmark = None
    if benchmark_path and benchmark_path.exists():
        try:
            benchmark = json.loads(benchmark_path.read_text())
        except (json.JSONDecodeError, OSError):
            pass

    if args.static:
        html = generate_html(runs, skill_name, previous, benchmark)
        args.static.parent.mkdir(parents=True, exist_ok=True)
        args.static.write_text(html)
        print(f"\n  Static viewer written to: {args.static}\n")
        sys.exit(0)

    # Kill any existing process on the target port
    port = args.port
    _kill_port(port)
    handler = partial(ReviewHandler, workspace, skill_name, feedback_path, previous, benchmark_path)
    try:
        server = HTTPServer(("127.0.0.1", port), handler)
    except OSError:
        # Port still in use after kill attempt — find a free one
        server = HTTPServer(("127.0.0.1", 0), handler)
        port = server.server_address[1]

    url = f"http://localhost:{port}"
    print("\n  Eval Viewer")
    print("  ─────────────────────────────────")
    print(f"  URL:       {url}")
    print(f"  Workspace: {workspace}")
    print(f"  Feedback:  {feedback_path}")
    if previous:
        print(f"  Previous:  {args.previous_workspace} ({len(previous)} runs)")
    if benchmark_path:
        print(f"  Benchmark: {benchmark_path}")
    print("\n  Press Ctrl+C to stop.\n")

    webbrowser.open(url)

    try:
        server.serve_forever()
    except KeyboardInterrupt:
        print("\nStopped.")
        server.server_close()


if __name__ == "__main__":
    main()
File diff suppressed because it is too large
@@ -0,0 +1,430 @@
# JSON Schemas

This document defines the JSON schemas used by skill-creator.

---

## evals.json

Defines the evals for a skill. Located at `evals/evals.json` within the skill directory.

```json
{
  "skill_name": "example-skill",
  "evals": [
    {
      "id": 1,
      "prompt": "User's example prompt",
      "expected_output": "Description of expected result",
      "files": ["evals/files/sample1.pdf"],
      "expectations": [
        "The output includes X",
        "The skill used script Y"
      ]
    }
  ]
}
```

**Fields:**
- `skill_name`: Name matching the skill's frontmatter
- `evals[].id`: Unique integer identifier
- `evals[].prompt`: The task to execute
- `evals[].expected_output`: Human-readable description of success
- `evals[].files`: Optional list of input file paths (relative to skill root)
- `evals[].expectations`: List of verifiable statements

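The structural requirements above lend themselves to a quick sanity check. A minimal validator sketch (the function name and error strings are illustrative, not part of the schema):

```python
def validate_evals(data):
    """Return a list of structural problems in an evals.json dict (empty = OK)."""
    errors = []
    if not data.get("skill_name"):
        errors.append("missing skill_name")
    seen_ids = set()
    for ev in data.get("evals", []):
        ev_id = ev.get("id")
        if ev_id in seen_ids:
            errors.append(f"duplicate id {ev_id}")
        seen_ids.add(ev_id)
        if not ev.get("prompt"):
            errors.append(f"eval {ev_id}: missing prompt")
        if not ev.get("expectations"):
            errors.append(f"eval {ev_id}: no expectations")
    return errors
```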
---

## history.json

Tracks version progression in Improve mode. Located at workspace root.

```json
{
  "started_at": "2026-01-15T10:30:00Z",
  "skill_name": "pdf",
  "current_best": "v2",
  "iterations": [
    {
      "version": "v0",
      "parent": null,
      "expectation_pass_rate": 0.65,
      "grading_result": "baseline",
      "is_current_best": false
    },
    {
      "version": "v1",
      "parent": "v0",
      "expectation_pass_rate": 0.75,
      "grading_result": "won",
      "is_current_best": false
    },
    {
      "version": "v2",
      "parent": "v1",
      "expectation_pass_rate": 0.85,
      "grading_result": "won",
      "is_current_best": true
    }
  ]
}
```

**Fields:**
- `started_at`: ISO timestamp of when improvement started
- `skill_name`: Name of the skill being improved
- `current_best`: Version identifier of the best performer
- `iterations[].version`: Version identifier (v0, v1, ...)
- `iterations[].parent`: Parent version this was derived from
- `iterations[].expectation_pass_rate`: Pass rate from grading
- `iterations[].grading_result`: "baseline", "won", "lost", or "tie"
- `iterations[].is_current_best`: Whether this is the current best version

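Given this structure, `current_best` should agree with the iteration carrying the highest pass rate. A sketch of that cross-check (`best_version` is an illustrative name, not part of the schema):

```python
def best_version(history):
    """Return the version with the highest expectation_pass_rate."""
    best = max(history["iterations"], key=lambda it: it["expectation_pass_rate"])
    return best["version"]
```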
---

## grading.json
|
||||
|
||||
Output from the grader agent. Located at `<run-dir>/grading.json`.
|
||||
|
||||
```json
|
||||
{
|
||||
"expectations": [
|
||||
{
|
||||
"text": "The output includes the name 'John Smith'",
|
||||
"passed": true,
|
||||
"evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
|
||||
},
|
||||
{
|
||||
"text": "The spreadsheet has a SUM formula in cell B10",
|
||||
"passed": false,
|
||||
"evidence": "No spreadsheet was created. The output was a text file."
|
||||
}
|
||||
],
|
||||
"summary": {
|
||||
"passed": 2,
|
||||
"failed": 1,
|
||||
"total": 3,
|
||||
"pass_rate": 0.67
|
||||
},
|
||||
"execution_metrics": {
|
||||
"tool_calls": {
|
||||
"Read": 5,
|
||||
"Write": 2,
|
||||
"Bash": 8
|
||||
},
|
||||
"total_tool_calls": 15,
|
||||
"total_steps": 6,
|
||||
"errors_encountered": 0,
|
||||
"output_chars": 12450,
|
||||
"transcript_chars": 3200
|
||||
},
|
||||
"timing": {
|
||||
"executor_duration_seconds": 165.0,
|
||||
"grader_duration_seconds": 26.0,
|
||||
"total_duration_seconds": 191.0
|
||||
},
|
||||
"claims": [
|
||||
{
|
||||
"claim": "The form has 12 fillable fields",
|
||||
"type": "factual",
|
||||
"verified": true,
|
||||
"evidence": "Counted 12 fields in field_info.json"
|
||||
}
|
||||
],
|
||||
"user_notes_summary": {
|
||||
"uncertainties": ["Used 2023 data, may be stale"],
|
||||
"needs_review": [],
|
||||
"workarounds": ["Fell back to text overlay for non-fillable fields"]
|
||||
},
|
||||
"eval_feedback": {
|
||||
"suggestions": [
|
||||
{
|
||||
"assertion": "The output includes the name 'John Smith'",
|
||||
"reason": "A hallucinated document that mentions the name would also pass"
|
||||
}
|
||||
],
|
||||
"overall": "Assertions check presence but not correctness."
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Fields:**

- `expectations[]`: Graded expectations with evidence
- `summary`: Aggregate pass/fail counts
- `execution_metrics`: Tool usage and output size (from executor's metrics.json)
- `timing`: Wall clock timing (from timing.json)
- `claims`: Extracted and verified claims from the output
- `user_notes_summary`: Issues flagged by the executor
- `eval_feedback`: (optional) Improvement suggestions for the evals, only present when the grader identifies issues worth raising

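Downstream tooling can consume the graded expectations directly; a minimal sketch using the field names from the example above (in practice the dict would come from `json.load` on `<run-dir>/grading.json`):

```python
# Fragment of a grading.json mirroring the example above; in practice,
# load it with json.load() from <run-dir>/grading.json.
grading = {
    "expectations": [
        {"text": "The output includes the name 'John Smith'", "passed": True,
         "evidence": "Found in transcript Step 3"},
        {"text": "The spreadsheet has a SUM formula in cell B10", "passed": False,
         "evidence": "No spreadsheet was created. The output was a text file."},
    ],
    "summary": {"passed": 2, "failed": 1, "total": 3, "pass_rate": 0.67},
}

# Collect the failures with their evidence for quick triage.
failures = [e for e in grading["expectations"] if not e["passed"]]
for e in failures:
    print(f"FAIL: {e['text']} -- {e['evidence']}")
```
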
---

## metrics.json

Output from the executor agent. Located at `<run-dir>/outputs/metrics.json`.

```json
{
  "tool_calls": {
    "Read": 5,
    "Write": 2,
    "Bash": 8,
    "Edit": 1,
    "Glob": 2,
    "Grep": 0
  },
  "total_tool_calls": 18,
  "total_steps": 6,
  "files_created": ["filled_form.pdf", "field_values.json"],
  "errors_encountered": 0,
  "output_chars": 12450,
  "transcript_chars": 3200
}
```

**Fields:**

- `tool_calls`: Count per tool type
- `total_tool_calls`: Sum of all tool calls
- `total_steps`: Number of major execution steps
- `files_created`: List of output files created
- `errors_encountered`: Number of errors during execution
- `output_chars`: Total character count of output files
- `transcript_chars`: Character count of transcript

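Since `total_tool_calls` is derived from `tool_calls`, a quick consistency check (a sketch, not part of the pipeline) can catch hand-edited or truncated files:

```python
# metrics.json fragment mirroring the example above; in practice,
# load it with json.load() from <run-dir>/outputs/metrics.json.
metrics = {
    "tool_calls": {"Read": 5, "Write": 2, "Bash": 8, "Edit": 1, "Glob": 2, "Grep": 0},
    "total_tool_calls": 18,
}

# The per-tool counts must add up to the reported total.
assert metrics["total_tool_calls"] == sum(metrics["tool_calls"].values())
```
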
---

## timing.json

Wall clock timing for a run. Located at `<run-dir>/timing.json`.

**How to capture:** When a subagent task completes, the task notification includes `total_tokens` and `duration_ms`. Save these immediately — they are not persisted anywhere else and cannot be recovered after the fact.

```json
{
  "total_tokens": 84852,
  "duration_ms": 23332,
  "total_duration_seconds": 23.3,
  "executor_start": "2026-01-15T10:30:00Z",
  "executor_end": "2026-01-15T10:32:45Z",
  "executor_duration_seconds": 165.0,
  "grader_start": "2026-01-15T10:32:46Z",
  "grader_end": "2026-01-15T10:33:12Z",
  "grader_duration_seconds": 26.0
}
```

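Saving the notification values the moment they arrive can be sketched like this (the `save_timing` helper name is illustrative, not part of the harness):

```python
import json
from pathlib import Path

def save_timing(run_dir: Path, total_tokens: int, duration_ms: int) -> None:
    """Persist the task-notification values immediately; they cannot be recovered later."""
    timing = {
        "total_tokens": total_tokens,
        "duration_ms": duration_ms,
        # Derived from duration_ms, as in the example above
        "total_duration_seconds": round(duration_ms / 1000, 1),
    }
    (run_dir / "timing.json").write_text(json.dumps(timing, indent=2))

save_timing(Path("."), 84852, 23332)
```
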
---

## benchmark.json

Output from Benchmark mode. Located at `benchmarks/<timestamp>/benchmark.json`.

```json
{
  "metadata": {
    "skill_name": "pdf",
    "skill_path": "/path/to/pdf",
    "executor_model": "claude-sonnet-4-20250514",
    "analyzer_model": "most-capable-model",
    "timestamp": "2026-01-15T10:30:00Z",
    "evals_run": [1, 2, 3],
    "runs_per_configuration": 3
  },

  "runs": [
    {
      "eval_id": 1,
      "eval_name": "Ocean",
      "configuration": "with_skill",
      "run_number": 1,
      "result": {
        "pass_rate": 0.86,
        "passed": 6,
        "failed": 1,
        "total": 7,
        "time_seconds": 42.5,
        "tokens": 3800,
        "tool_calls": 18,
        "errors": 0
      },
      "expectations": [
        {"text": "...", "passed": true, "evidence": "..."}
      ],
      "notes": [
        "Used 2023 data, may be stale",
        "Fell back to text overlay for non-fillable fields"
      ]
    }
  ],

  "run_summary": {
    "with_skill": {
      "pass_rate": {"mean": 0.85, "stddev": 0.05, "min": 0.80, "max": 0.90},
      "time_seconds": {"mean": 45.0, "stddev": 12.0, "min": 32.0, "max": 58.0},
      "tokens": {"mean": 3800, "stddev": 400, "min": 3200, "max": 4100}
    },
    "without_skill": {
      "pass_rate": {"mean": 0.35, "stddev": 0.08, "min": 0.28, "max": 0.45},
      "time_seconds": {"mean": 32.0, "stddev": 8.0, "min": 24.0, "max": 42.0},
      "tokens": {"mean": 2100, "stddev": 300, "min": 1800, "max": 2500}
    },
    "delta": {
      "pass_rate": "+0.50",
      "time_seconds": "+13.0",
      "tokens": "+1700"
    }
  },

  "notes": [
    "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
    "Eval 3 shows high variance (50% ± 40%) - may be flaky or model-dependent",
    "Without-skill runs consistently fail on table extraction expectations",
    "Skill adds 13s average execution time but improves pass rate by 50 percentage points"
  ]
}
```

**Fields:**

- `metadata`: Information about the benchmark run
  - `skill_name`: Name of the skill
  - `timestamp`: When the benchmark was run
  - `evals_run`: List of eval names or IDs
  - `runs_per_configuration`: Number of runs per configuration (e.g. 3)
- `runs[]`: Individual run results
  - `eval_id`: Numeric eval identifier
  - `eval_name`: Human-readable eval name (used as section header in the viewer)
  - `configuration`: Must be `"with_skill"` or `"without_skill"` (the viewer uses this exact string for grouping and color coding)
  - `run_number`: Integer run number (1, 2, 3...)
  - `result`: Nested object with `pass_rate`, `passed`, `failed`, `total`, `time_seconds`, `tokens`, `tool_calls`, `errors`
- `run_summary`: Statistical aggregates per configuration
  - `with_skill` / `without_skill`: Each contains `pass_rate`, `time_seconds`, and `tokens` objects with `mean`, `stddev`, `min`, and `max` fields
  - `delta`: Difference strings like `"+0.50"`, `"+13.0"`, `"+1700"`
- `notes`: Freeform observations from the analyzer

**Important:** The viewer reads these field names exactly. Using `config` instead of `configuration`, or putting `pass_rate` at the top level of a run instead of nested under `result`, will cause the viewer to show empty/zero values. Always reference this schema when generating benchmark.json manually.

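A small pre-flight check along these lines can catch both mistakes before the viewer renders zeros (a sketch; `check_run` is a hypothetical helper, not part of the toolchain):

```python
def check_run(run: dict) -> list[str]:
    """Return schema problems in one runs[] entry that the viewer would choke on."""
    problems = []
    if run.get("configuration") not in ("with_skill", "without_skill"):
        problems.append("configuration must be 'with_skill' or 'without_skill'")
    if "pass_rate" in run and "result" not in run:
        problems.append("pass_rate must be nested under result")
    if not isinstance(run.get("result"), dict):
        problems.append("missing nested result object")
    return problems

good = {"configuration": "with_skill", "run_number": 1, "result": {"pass_rate": 0.85}}
bad = {"config": "with_skill", "pass_rate": 0.85}
print(check_run(good))
print(check_run(bad))
```
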
---

## comparison.json

Output from the blind comparator. Located at `<grading-dir>/comparison-N.json`.

```json
{
  "winner": "A",
  "reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
  "rubric": {
    "A": {
      "content": {
        "correctness": 5,
        "completeness": 5,
        "accuracy": 4
      },
      "structure": {
        "organization": 4,
        "formatting": 5,
        "usability": 4
      },
      "content_score": 4.7,
      "structure_score": 4.3,
      "overall_score": 9.0
    },
    "B": {
      "content": {
        "correctness": 3,
        "completeness": 2,
        "accuracy": 3
      },
      "structure": {
        "organization": 3,
        "formatting": 2,
        "usability": 3
      },
      "content_score": 2.7,
      "structure_score": 2.7,
      "overall_score": 5.4
    }
  },
  "output_quality": {
    "A": {
      "score": 9,
      "strengths": ["Complete solution", "Well-formatted", "All fields present"],
      "weaknesses": ["Minor style inconsistency in header"]
    },
    "B": {
      "score": 5,
      "strengths": ["Readable output", "Correct basic structure"],
      "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
    }
  },
  "expectation_results": {
    "A": {
      "passed": 4,
      "total": 5,
      "pass_rate": 0.80,
      "details": [
        {"text": "Output includes name", "passed": true}
      ]
    },
    "B": {
      "passed": 3,
      "total": 5,
      "pass_rate": 0.60,
      "details": [
        {"text": "Output includes name", "passed": true}
      ]
    }
  }
}
```

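The rubric scores in the example are consistent with averaging each sub-score group to one decimal and summing the two averages; a sketch of that arithmetic (an assumption about how the comparator derives them, not something the schema guarantees):

```python
# Sub-scores for output A, taken from the example above.
content = {"correctness": 5, "completeness": 5, "accuracy": 4}
structure = {"organization": 4, "formatting": 5, "usability": 4}

# Each group score is the mean of its sub-scores, rounded to one decimal;
# the overall score is the sum of the two group scores.
content_score = round(sum(content.values()) / len(content), 1)        # 4.7
structure_score = round(sum(structure.values()) / len(structure), 1)  # 4.3
overall_score = content_score + structure_score                       # 9.0
```
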
---

## analysis.json

Output from the post-hoc analyzer. Located at `<grading-dir>/analysis.json`.

```json
{
  "comparison_summary": {
    "winner": "A",
    "winner_skill": "path/to/winner/skill",
    "loser_skill": "path/to/loser/skill",
    "comparator_reasoning": "Brief summary of why comparator chose winner"
  },
  "winner_strengths": [
    "Clear step-by-step instructions for handling multi-page documents",
    "Included validation script that caught formatting errors"
  ],
  "loser_weaknesses": [
    "Vague instruction 'process the document appropriately' led to inconsistent behavior",
    "No script for validation, agent had to improvise"
  ],
  "instruction_following": {
    "winner": {
      "score": 9,
      "issues": ["Minor: skipped optional logging step"]
    },
    "loser": {
      "score": 6,
      "issues": [
        "Did not use the skill's formatting template",
        "Invented own approach instead of following step 3"
      ]
    }
  },
  "improvement_suggestions": [
    {
      "priority": "high",
      "category": "instructions",
      "suggestion": "Replace 'process the document appropriately' with explicit steps",
      "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
    }
  ],
  "transcript_insights": {
    "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script",
    "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods"
  }
}
```
#!/usr/bin/env python3
"""
Aggregate individual run results into benchmark summary statistics.

Reads grading.json files from run directories and produces:
- run_summary with mean, stddev, min, max for each metric
- delta between with_skill and without_skill configurations

Usage:
    python aggregate_benchmark.py <benchmark_dir>

Example:
    python aggregate_benchmark.py benchmarks/2026-01-15T10-30-00/

The script supports two directory layouts:

Workspace layout (from skill-creator iterations):
    <benchmark_dir>/
    └── eval-N/
        ├── with_skill/
        │   ├── run-1/grading.json
        │   └── run-2/grading.json
        └── without_skill/
            ├── run-1/grading.json
            └── run-2/grading.json

Legacy layout (with runs/ subdirectory):
    <benchmark_dir>/
    └── runs/
        └── eval-N/
            ├── with_skill/
            │   └── run-1/grading.json
            └── without_skill/
                └── run-1/grading.json
"""

import argparse
import json
import math
import sys
from datetime import datetime, timezone
from pathlib import Path


def calculate_stats(values: list[float]) -> dict:
    """Calculate mean, stddev, min, max for a list of values."""
    if not values:
        return {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0}

    n = len(values)
    mean = sum(values) / n

    if n > 1:
        variance = sum((x - mean) ** 2 for x in values) / (n - 1)
        stddev = math.sqrt(variance)
    else:
        stddev = 0.0

    return {
        "mean": round(mean, 4),
        "stddev": round(stddev, 4),
        "min": round(min(values), 4),
        "max": round(max(values), 4)
    }


def load_run_results(benchmark_dir: Path) -> dict:
    """
    Load all run results from a benchmark directory.

    Returns dict keyed by config name (e.g. "with_skill"/"without_skill",
    or "new_skill"/"old_skill"), each containing a list of run results.
    """
    # Support both layouts: eval dirs directly under benchmark_dir, or under runs/
    runs_dir = benchmark_dir / "runs"
    if runs_dir.exists():
        search_dir = runs_dir
    elif list(benchmark_dir.glob("eval-*")):
        search_dir = benchmark_dir
    else:
        print(f"No eval directories found in {benchmark_dir} or {benchmark_dir / 'runs'}")
        return {}

    results: dict[str, list] = {}

    for eval_idx, eval_dir in enumerate(sorted(search_dir.glob("eval-*"))):
        metadata_path = eval_dir / "eval_metadata.json"
        if metadata_path.exists():
            try:
                with open(metadata_path) as mf:
                    eval_id = json.load(mf).get("eval_id", eval_idx)
            except (json.JSONDecodeError, OSError):
                eval_id = eval_idx
        else:
            try:
                eval_id = int(eval_dir.name.split("-")[1])
            except ValueError:
                eval_id = eval_idx

        # Discover config directories dynamically rather than hardcoding names
        for config_dir in sorted(eval_dir.iterdir()):
            if not config_dir.is_dir():
                continue
            # Skip non-config directories (inputs, outputs, etc.)
            if not list(config_dir.glob("run-*")):
                continue
            config = config_dir.name
            if config not in results:
                results[config] = []

            for run_dir in sorted(config_dir.glob("run-*")):
                run_number = int(run_dir.name.split("-")[1])
                grading_file = run_dir / "grading.json"

                if not grading_file.exists():
                    print(f"Warning: grading.json not found in {run_dir}")
                    continue

                try:
                    with open(grading_file) as f:
                        grading = json.load(f)
                except json.JSONDecodeError as e:
                    print(f"Warning: Invalid JSON in {grading_file}: {e}")
                    continue

                # Extract metrics
                result = {
                    "eval_id": eval_id,
                    "run_number": run_number,
                    "pass_rate": grading.get("summary", {}).get("pass_rate", 0.0),
                    "passed": grading.get("summary", {}).get("passed", 0),
                    "failed": grading.get("summary", {}).get("failed", 0),
                    "total": grading.get("summary", {}).get("total", 0),
                }

                # Extract timing — check grading.json first, then sibling timing.json
                timing = grading.get("timing", {})
                result["time_seconds"] = timing.get("total_duration_seconds", 0.0)
                timing_file = run_dir / "timing.json"
                if result["time_seconds"] == 0.0 and timing_file.exists():
                    try:
                        with open(timing_file) as tf:
                            timing_data = json.load(tf)
                        result["time_seconds"] = timing_data.get("total_duration_seconds", 0.0)
                        result["tokens"] = timing_data.get("total_tokens", 0)
                    except json.JSONDecodeError:
                        pass

                # Extract metrics if available
                metrics = grading.get("execution_metrics", {})
                result["tool_calls"] = metrics.get("total_tool_calls", 0)
                if not result.get("tokens"):
                    result["tokens"] = metrics.get("output_chars", 0)
                result["errors"] = metrics.get("errors_encountered", 0)

                # Extract expectations — viewer requires fields: text, passed, evidence
                raw_expectations = grading.get("expectations", [])
                for exp in raw_expectations:
                    if "text" not in exp or "passed" not in exp:
                        print(f"Warning: expectation in {grading_file} missing required fields (text, passed, evidence): {exp}")
                result["expectations"] = raw_expectations

                # Extract notes from user_notes_summary
                notes_summary = grading.get("user_notes_summary", {})
                notes = []
                notes.extend(notes_summary.get("uncertainties", []))
                notes.extend(notes_summary.get("needs_review", []))
                notes.extend(notes_summary.get("workarounds", []))
                result["notes"] = notes

                results[config].append(result)

    return results


def aggregate_results(results: dict) -> dict:
    """
    Aggregate run results into summary statistics.

    Returns run_summary with stats for each configuration and delta.
    """
    run_summary = {}
    configs = list(results.keys())

    for config in configs:
        runs = results.get(config, [])

        if not runs:
            run_summary[config] = {
                "pass_rate": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
                "time_seconds": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
                "tokens": {"mean": 0, "stddev": 0, "min": 0, "max": 0}
            }
            continue

        pass_rates = [r["pass_rate"] for r in runs]
        times = [r["time_seconds"] for r in runs]
        tokens = [r.get("tokens", 0) for r in runs]

        run_summary[config] = {
            "pass_rate": calculate_stats(pass_rates),
            "time_seconds": calculate_stats(times),
            "tokens": calculate_stats(tokens)
        }

    # Calculate delta between the first two configs (if two exist)
    if len(configs) >= 2:
        primary = run_summary.get(configs[0], {})
        baseline = run_summary.get(configs[1], {})
    else:
        primary = run_summary.get(configs[0], {}) if configs else {}
        baseline = {}

    delta_pass_rate = primary.get("pass_rate", {}).get("mean", 0) - baseline.get("pass_rate", {}).get("mean", 0)
    delta_time = primary.get("time_seconds", {}).get("mean", 0) - baseline.get("time_seconds", {}).get("mean", 0)
    delta_tokens = primary.get("tokens", {}).get("mean", 0) - baseline.get("tokens", {}).get("mean", 0)

    run_summary["delta"] = {
        "pass_rate": f"{delta_pass_rate:+.2f}",
        "time_seconds": f"{delta_time:+.1f}",
        "tokens": f"{delta_tokens:+.0f}"
    }

    return run_summary


def generate_benchmark(benchmark_dir: Path, skill_name: str = "", skill_path: str = "") -> dict:
    """
    Generate complete benchmark.json from run results.
    """
    results = load_run_results(benchmark_dir)
    run_summary = aggregate_results(results)

    # Build runs array for benchmark.json
    runs = []
    for config in results:
        for result in results[config]:
            runs.append({
                "eval_id": result["eval_id"],
                "configuration": config,
                "run_number": result["run_number"],
                "result": {
                    "pass_rate": result["pass_rate"],
                    "passed": result["passed"],
                    "failed": result["failed"],
                    "total": result["total"],
                    "time_seconds": result["time_seconds"],
                    "tokens": result.get("tokens", 0),
                    "tool_calls": result.get("tool_calls", 0),
                    "errors": result.get("errors", 0)
                },
                "expectations": result["expectations"],
                "notes": result["notes"]
            })

    # Determine eval IDs from results
    eval_ids = sorted(set(
        r["eval_id"]
        for config in results.values()
        for r in config
    ))

    benchmark = {
        "metadata": {
            "skill_name": skill_name or "<skill-name>",
            "skill_path": skill_path or "<path/to/skill>",
            "executor_model": "<model-name>",
            "analyzer_model": "<model-name>",
            "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
            "evals_run": eval_ids,
            "runs_per_configuration": 3
        },
        "runs": runs,
        "run_summary": run_summary,
        "notes": []  # To be filled by analyzer
    }

    return benchmark


def generate_markdown(benchmark: dict) -> str:
    """Generate human-readable benchmark.md from benchmark data."""
    metadata = benchmark["metadata"]
    run_summary = benchmark["run_summary"]

    # Determine config names (excluding "delta")
    configs = [k for k in run_summary if k != "delta"]
    config_a = configs[0] if len(configs) >= 1 else "config_a"
    config_b = configs[1] if len(configs) >= 2 else "config_b"
    label_a = config_a.replace("_", " ").title()
    label_b = config_b.replace("_", " ").title()

    lines = [
        f"# Skill Benchmark: {metadata['skill_name']}",
        "",
        f"**Model**: {metadata['executor_model']}",
        f"**Date**: {metadata['timestamp']}",
        f"**Evals**: {', '.join(map(str, metadata['evals_run']))} ({metadata['runs_per_configuration']} runs each per configuration)",
        "",
        "## Summary",
        "",
        f"| Metric | {label_a} | {label_b} | Delta |",
        "|--------|------------|---------------|-------|",
    ]

    a_summary = run_summary.get(config_a, {})
    b_summary = run_summary.get(config_b, {})
    delta = run_summary.get("delta", {})

    # Format pass rate
    a_pr = a_summary.get("pass_rate", {})
    b_pr = b_summary.get("pass_rate", {})
    lines.append(f"| Pass Rate | {a_pr.get('mean', 0)*100:.0f}% ± {a_pr.get('stddev', 0)*100:.0f}% | {b_pr.get('mean', 0)*100:.0f}% ± {b_pr.get('stddev', 0)*100:.0f}% | {delta.get('pass_rate', '—')} |")

    # Format time
    a_time = a_summary.get("time_seconds", {})
    b_time = b_summary.get("time_seconds", {})
    lines.append(f"| Time | {a_time.get('mean', 0):.1f}s ± {a_time.get('stddev', 0):.1f}s | {b_time.get('mean', 0):.1f}s ± {b_time.get('stddev', 0):.1f}s | {delta.get('time_seconds', '—')}s |")

    # Format tokens
    a_tokens = a_summary.get("tokens", {})
    b_tokens = b_summary.get("tokens", {})
    lines.append(f"| Tokens | {a_tokens.get('mean', 0):.0f} ± {a_tokens.get('stddev', 0):.0f} | {b_tokens.get('mean', 0):.0f} ± {b_tokens.get('stddev', 0):.0f} | {delta.get('tokens', '—')} |")

    # Notes section
    if benchmark.get("notes"):
        lines.extend([
            "",
            "## Notes",
            ""
        ])
        for note in benchmark["notes"]:
            lines.append(f"- {note}")

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(
        description="Aggregate benchmark run results into summary statistics"
    )
    parser.add_argument(
        "benchmark_dir",
        type=Path,
        help="Path to the benchmark directory"
    )
    parser.add_argument(
        "--skill-name",
        default="",
        help="Name of the skill being benchmarked"
    )
    parser.add_argument(
        "--skill-path",
        default="",
        help="Path to the skill being benchmarked"
    )
    parser.add_argument(
        "--output", "-o",
        type=Path,
        help="Output path for benchmark.json (default: <benchmark_dir>/benchmark.json)"
    )

    args = parser.parse_args()

    if not args.benchmark_dir.exists():
        print(f"Directory not found: {args.benchmark_dir}")
        sys.exit(1)

    # Generate benchmark
    benchmark = generate_benchmark(args.benchmark_dir, args.skill_name, args.skill_path)

    # Determine output paths
    output_json = args.output or (args.benchmark_dir / "benchmark.json")
    output_md = output_json.with_suffix(".md")

    # Write benchmark.json
    with open(output_json, "w") as f:
        json.dump(benchmark, f, indent=2)
    print(f"Generated: {output_json}")

    # Write benchmark.md
    markdown = generate_markdown(benchmark)
    with open(output_md, "w") as f:
        f.write(markdown)
    print(f"Generated: {output_md}")

    # Print summary
    run_summary = benchmark["run_summary"]
    configs = [k for k in run_summary if k != "delta"]
    delta = run_summary.get("delta", {})

    print("\nSummary:")
    for config in configs:
        pr = run_summary[config]["pass_rate"]["mean"]
        label = config.replace("_", " ").title()
        print(f"  {label}: {pr*100:.1f}% pass rate")
    print(f"  Delta: {delta.get('pass_rate', '—')}")


if __name__ == "__main__":
    main()
#!/usr/bin/env python3
"""Generate an HTML report from run_loop.py output.

Takes the JSON output from run_loop.py and generates a visual HTML report
showing each description attempt with check/x for each test case.
Distinguishes between train and test queries.
"""

import argparse
import html
import json
import sys
from pathlib import Path


def generate_html(data: dict, auto_refresh: bool = False, skill_name: str = "") -> str:
    """Generate HTML report from loop output data. If auto_refresh is True, adds a meta refresh tag."""
    history = data.get("history", [])
    holdout = data.get("holdout", 0)
    title_prefix = html.escape(skill_name + " \u2014 ") if skill_name else ""

    # Get all unique queries from train and test sets, with should_trigger info
    train_queries: list[dict] = []
    test_queries: list[dict] = []
    if history:
        for r in history[0].get("train_results", history[0].get("results", [])):
            train_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})
        if history[0].get("test_results"):
            for r in history[0].get("test_results", []):
                test_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})

    refresh_tag = '    <meta http-equiv="refresh" content="5">\n' if auto_refresh else ""

html_parts = ["""<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
""" + refresh_tag + """ <title>""" + title_prefix + """Skill Description Optimization</title>
|
||||
<link rel="preconnect" href="https://fonts.googleapis.com">
|
||||
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
|
||||
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
|
||||
<style>
|
||||
body {
|
||||
font-family: 'Lora', Georgia, serif;
|
||||
max-width: 100%;
|
||||
margin: 0 auto;
|
||||
padding: 20px;
|
||||
background: #faf9f5;
|
||||
color: #141413;
|
||||
}
|
||||
h1 { font-family: 'Poppins', sans-serif; color: #141413; }
|
||||
.explainer {
|
||||
background: white;
|
||||
padding: 15px;
|
||||
border-radius: 6px;
|
||||
margin-bottom: 20px;
|
||||
border: 1px solid #e8e6dc;
|
||||
color: #b0aea5;
|
||||
font-size: 0.875rem;
|
||||
line-height: 1.6;
|
||||
}
|
||||
.summary {
|
||||
background: white;
|
||||
padding: 15px;
|
||||
border-radius: 6px;
|
||||
margin-bottom: 20px;
|
||||
border: 1px solid #e8e6dc;
|
||||
}
|
||||
.summary p { margin: 5px 0; }
|
||||
.best { color: #788c5d; font-weight: bold; }
|
||||
.table-container {
|
||||
overflow-x: auto;
|
||||
width: 100%;
|
||||
}
|
||||
table {
|
||||
border-collapse: collapse;
|
||||
background: white;
|
||||
border: 1px solid #e8e6dc;
|
||||
border-radius: 6px;
|
||||
font-size: 12px;
|
||||
min-width: 100%;
|
||||
}
|
||||
th, td {
|
||||
padding: 8px;
|
||||
text-align: left;
|
||||
border: 1px solid #e8e6dc;
|
||||
white-space: normal;
|
||||
word-wrap: break-word;
|
||||
}
|
||||
th {
|
||||
font-family: 'Poppins', sans-serif;
|
||||
background: #141413;
|
||||
color: #faf9f5;
|
||||
font-weight: 500;
|
||||
}
|
||||
th.test-col {
|
||||
background: #6a9bcc;
|
||||
}
|
||||
th.query-col { min-width: 200px; }
|
||||
td.description {
|
||||
font-family: monospace;
|
||||
font-size: 11px;
|
||||
word-wrap: break-word;
|
||||
max-width: 400px;
|
||||
}
|
||||
td.result {
|
||||
text-align: center;
|
||||
font-size: 16px;
|
||||
min-width: 40px;
|
||||
}
|
||||
td.test-result {
|
||||
background: #f0f6fc;
|
||||
}
|
||||
.pass { color: #788c5d; }
|
||||
.fail { color: #c44; }
|
||||
.rate {
|
||||
font-size: 9px;
|
||||
color: #b0aea5;
|
||||
display: block;
|
||||
}
|
||||
tr:hover { background: #faf9f5; }
|
||||
.score {
|
||||
display: inline-block;
|
||||
padding: 2px 6px;
|
||||
border-radius: 4px;
|
||||
font-weight: bold;
|
||||
font-size: 11px;
|
||||
}
|
||||
.score-good { background: #eef2e8; color: #788c5d; }
|
||||
.score-ok { background: #fef3c7; color: #d97706; }
|
||||
.score-bad { background: #fceaea; color: #c44; }
|
||||
.train-label { color: #b0aea5; font-size: 10px; }
|
||||
.test-label { color: #6a9bcc; font-size: 10px; font-weight: bold; }
|
||||
.best-row { background: #f5f8f2; }
|
||||
th.positive-col { border-bottom: 3px solid #788c5d; }
|
||||
th.negative-col { border-bottom: 3px solid #c44; }
|
||||
th.test-col.positive-col { border-bottom: 3px solid #788c5d; }
|
||||
th.test-col.negative-col { border-bottom: 3px solid #c44; }
|
||||
.legend { font-family: 'Poppins', sans-serif; display: flex; gap: 20px; margin-bottom: 10px; font-size: 13px; align-items: center; }
|
||||
.legend-item { display: flex; align-items: center; gap: 6px; }
|
||||
.legend-swatch { width: 16px; height: 16px; border-radius: 3px; display: inline-block; }
|
||||
.swatch-positive { background: #141413; border-bottom: 3px solid #788c5d; }
|
||||
.swatch-negative { background: #141413; border-bottom: 3px solid #c44; }
|
||||
.swatch-test { background: #6a9bcc; }
|
||||
.swatch-train { background: #141413; }
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<h1>""" + title_prefix + """Skill Description Optimization</h1>
|
||||
<div class="explainer">
|
||||
<strong>Optimizing your skill's description.</strong> This page updates automatically as Claude tests different versions of your skill's description. Each row is an iteration — a new description attempt. The columns show test queries: green checkmarks mean the skill triggered correctly (or correctly didn't trigger), red crosses mean it got it wrong. The "Train" score shows performance on queries used to improve the description; the "Test" score shows performance on held-out queries the optimizer hasn't seen. When it's done, Claude will apply the best-performing description to your skill.
|
||||
</div>
|
||||
"""]
|
||||
|
||||
    # Summary section
    best_test_score = data.get('best_test_score')
    best_train_score = data.get('best_train_score')
    html_parts.append(f"""
<div class="summary">
    <p><strong>Original:</strong> {html.escape(data.get('original_description', 'N/A'))}</p>
    <p class="best"><strong>Best:</strong> {html.escape(data.get('best_description', 'N/A'))}</p>
    <p><strong>Best Score:</strong> {data.get('best_score', 'N/A')} {'(test)' if best_test_score else '(train)'}</p>
    <p><strong>Iterations:</strong> {data.get('iterations_run', 0)} | <strong>Train:</strong> {data.get('train_size', '?')} | <strong>Test:</strong> {data.get('test_size', '?')}</p>
</div>
""")

    # Legend
    html_parts.append("""
<div class="legend">
    <span style="font-weight:600">Query columns:</span>
    <span class="legend-item"><span class="legend-swatch swatch-positive"></span> Should trigger</span>
    <span class="legend-item"><span class="legend-swatch swatch-negative"></span> Should NOT trigger</span>
    <span class="legend-item"><span class="legend-swatch swatch-train"></span> Train</span>
    <span class="legend-item"><span class="legend-swatch swatch-test"></span> Test</span>
</div>
""")

    # Table header
    html_parts.append("""
<div class="table-container">
<table>
    <thead>
        <tr>
            <th>Iter</th>
            <th>Train</th>
            <th>Test</th>
            <th class="query-col">Description</th>
""")

    # Add column headers for train queries
    for qinfo in train_queries:
        polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
        html_parts.append(f'            <th class="{polarity}">{html.escape(qinfo["query"])}</th>\n')

    # Add column headers for test queries (different color)
    for qinfo in test_queries:
        polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
        html_parts.append(f'            <th class="test-col {polarity}">{html.escape(qinfo["query"])}</th>\n')

    html_parts.append("""        </tr>
    </thead>
    <tbody>
""")

    # Find best iteration for highlighting (guard against an empty history)
    if not history:
        best_iter = None
    elif test_queries:
        best_iter = max(history, key=lambda h: h.get("test_passed") or 0).get("iteration")
    else:
        best_iter = max(history, key=lambda h: h.get("train_passed", h.get("passed", 0))).get("iteration")

    # Add rows for each iteration
    for h in history:
        iteration = h.get("iteration", "?")
        train_passed = h.get("train_passed", h.get("passed", 0))
        train_total = h.get("train_total", h.get("total", 0))
        test_passed = h.get("test_passed")
        test_total = h.get("test_total")
        description = h.get("description", "")
        train_results = h.get("train_results", h.get("results", []))
        test_results = h.get("test_results", [])

        # Create lookups for results by query
        train_by_query = {r["query"]: r for r in train_results}
        test_by_query = {r["query"]: r for r in test_results} if test_results else {}

        # Compute aggregate correct/total runs across all retries
        def aggregate_runs(results: list[dict]) -> tuple[int, int]:
            correct = 0
            total = 0
            for r in results:
                runs = r.get("runs", 0)
                triggers = r.get("triggers", 0)
                total += runs
                if r.get("should_trigger", True):
                    correct += triggers
                else:
                    correct += runs - triggers
            return correct, total

        train_correct, train_runs = aggregate_runs(train_results)
        test_correct, test_runs = aggregate_runs(test_results)

        # Determine score classes
        def score_class(correct: int, total: int) -> str:
            if total > 0:
                ratio = correct / total
                if ratio >= 0.8:
                    return "score-good"
                elif ratio >= 0.5:
                    return "score-ok"
            return "score-bad"

        train_class = score_class(train_correct, train_runs)
        test_class = score_class(test_correct, test_runs)

        row_class = "best-row" if iteration == best_iter else ""

        html_parts.append(f"""        <tr class="{row_class}">
            <td>{iteration}</td>
            <td><span class="score {train_class}">{train_correct}/{train_runs}</span></td>
            <td><span class="score {test_class}">{test_correct}/{test_runs}</span></td>
            <td class="description">{html.escape(description)}</td>
""")

        # Add result for each train query
        for qinfo in train_queries:
            r = train_by_query.get(qinfo["query"], {})
            did_pass = r.get("pass", False)
            triggers = r.get("triggers", 0)
            runs = r.get("runs", 0)

            icon = "✓" if did_pass else "✗"
            css_class = "pass" if did_pass else "fail"

            html_parts.append(f'            <td class="result {css_class}">{icon}<span class="rate">{triggers}/{runs}</span></td>\n')
|
||||
|
||||
# Add result for each test query (with different background)
|
||||
for qinfo in test_queries:
|
||||
r = test_by_query.get(qinfo["query"], {})
|
||||
did_pass = r.get("pass", False)
|
||||
triggers = r.get("triggers", 0)
|
||||
runs = r.get("runs", 0)
|
||||
|
||||
icon = "✓" if did_pass else "✗"
|
||||
css_class = "pass" if did_pass else "fail"
|
||||
|
||||
html_parts.append(f' <td class="result test-result {css_class}">{icon}<span class="rate">{triggers}/{runs}</span></td>\n')
|
||||
|
||||
html_parts.append(" </tr>\n")
|
||||
|
||||
html_parts.append(""" </tbody>
|
||||
</table>
|
||||
</div>
|
||||
""")
|
||||
|
||||
html_parts.append("""
|
||||
</body>
|
||||
</html>
|
||||
""")
|
||||
|
||||
return "".join(html_parts)
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description="Generate HTML report from run_loop output")
|
||||
parser.add_argument("input", help="Path to JSON output from run_loop.py (or - for stdin)")
|
||||
parser.add_argument("-o", "--output", default=None, help="Output HTML file (default: stdout)")
|
||||
parser.add_argument("--skill-name", default="", help="Skill name to include in the report title")
|
||||
args = parser.parse_args()
|
||||
|
||||
if args.input == "-":
|
||||
data = json.load(sys.stdin)
|
||||
else:
|
||||
data = json.loads(Path(args.input).read_text())
|
||||
|
||||
html_output = generate_html(data, skill_name=args.skill_name)
|
||||
|
||||
if args.output:
|
||||
Path(args.output).write_text(html_output)
|
||||
print(f"Report written to {args.output}", file=sys.stderr)
|
||||
else:
|
||||
print(html_output)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
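The best-row highlighting in the report generator selects the iteration with the highest test score when test queries exist, using `or 0` so a missing or `None` `test_passed` never reaches the comparison. A minimal standalone sketch with hypothetical history data:

```python
# Hypothetical history entries, mirroring the shape the report consumes.
history = [
    {"iteration": 1, "train_passed": 5, "test_passed": 2},
    {"iteration": 2, "train_passed": 4, "test_passed": 3},
    {"iteration": 3, "train_passed": 6, "test_passed": None},  # no test run yet
]

# `or 0` maps a missing/None test score to 0, so max() never compares None
# against an int (which would raise a TypeError).
best_iter = max(history, key=lambda h: h.get("test_passed") or 0).get("iteration")
print(best_iter)  # → 2
```

Note that iteration 3 has the best train score but is skipped because it has no test score; that asymmetry is intentional when test queries are present.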
+247
@@ -0,0 +1,247 @@
#!/usr/bin/env python3
"""Improve a skill description based on eval results.

Takes eval results (from run_eval.py) and generates an improved description
by calling `claude -p` as a subprocess (same auth pattern as run_eval.py —
uses the session's Claude Code auth, no separate ANTHROPIC_API_KEY needed).
"""

import argparse
import json
import os
import re
import subprocess
import sys
from pathlib import Path

from scripts.utils import parse_skill_md


def _call_claude(prompt: str, model: str | None, timeout: int = 300) -> str:
    """Run `claude -p` with the prompt on stdin and return the text response.

    Prompt goes over stdin (not argv) because it embeds the full SKILL.md
    body and can easily exceed comfortable argv length.
    """
    cmd = ["claude", "-p", "--output-format", "text"]
    if model:
        cmd.extend(["--model", model])

    # Remove CLAUDECODE env var to allow nesting claude -p inside a
    # Claude Code session. The guard is for interactive terminal conflicts;
    # programmatic subprocess usage is safe. Same pattern as run_eval.py.
    env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}

    result = subprocess.run(
        cmd,
        input=prompt,
        capture_output=True,
        text=True,
        env=env,
        timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(
            f"claude -p exited {result.returncode}\nstderr: {result.stderr}"
        )
    return result.stdout


def improve_description(
    skill_name: str,
    skill_content: str,
    current_description: str,
    eval_results: dict,
    history: list[dict],
    model: str,
    test_results: dict | None = None,
    log_dir: Path | None = None,
    iteration: int | None = None,
) -> str:
    """Call Claude to improve the description based on eval results."""
    failed_triggers = [
        r for r in eval_results["results"]
        if r["should_trigger"] and not r["pass"]
    ]
    false_triggers = [
        r for r in eval_results["results"]
        if not r["should_trigger"] and not r["pass"]
    ]

    # Build scores summary
    train_score = f"{eval_results['summary']['passed']}/{eval_results['summary']['total']}"
    if test_results:
        test_score = f"{test_results['summary']['passed']}/{test_results['summary']['total']}"
        scores_summary = f"Train: {train_score}, Test: {test_score}"
    else:
        scores_summary = f"Train: {train_score}"

    prompt = f"""You are optimizing a skill description for a Claude Code skill called "{skill_name}". A "skill" is sort of like a prompt, but with progressive disclosure -- there's a title and description that Claude sees when deciding whether to use the skill, and then if it does use the skill, it reads the .md file which has lots more details and potentially links to other resources in the skill folder like helper files and scripts and additional documentation or examples.

The description appears in Claude's "available_skills" list. When a user sends a query, Claude decides whether to invoke the skill based solely on the title and on this description. Your goal is to write a description that triggers for relevant queries, and doesn't trigger for irrelevant ones.

Here's the current description:
<current_description>
"{current_description}"
</current_description>

Current scores ({scores_summary}):
<scores_summary>
"""
    if failed_triggers:
        prompt += "FAILED TO TRIGGER (should have triggered but didn't):\n"
        for r in failed_triggers:
            prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
        prompt += "\n"

    if false_triggers:
        prompt += "FALSE TRIGGERS (triggered but shouldn't have):\n"
        for r in false_triggers:
            prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
        prompt += "\n"

    if history:
        prompt += "PREVIOUS ATTEMPTS (do NOT repeat these — try something structurally different):\n\n"
        for h in history:
            train_s = f"{h.get('train_passed', h.get('passed', 0))}/{h.get('train_total', h.get('total', 0))}"
            test_s = f"{h.get('test_passed', '?')}/{h.get('test_total', '?')}" if h.get('test_passed') is not None else None
            score_str = f"train={train_s}" + (f", test={test_s}" if test_s else "")
            prompt += f'<attempt {score_str}>\n'
            prompt += f'Description: "{h["description"]}"\n'
            if "results" in h:
                prompt += "Train results:\n"
                for r in h["results"]:
                    status = "PASS" if r["pass"] else "FAIL"
                    prompt += f' [{status}] "{r["query"][:80]}" (triggered {r["triggers"]}/{r["runs"]})\n'
            if h.get("note"):
                prompt += f'Note: {h["note"]}\n'
            prompt += "</attempt>\n\n"

    prompt += f"""</scores_summary>

Skill content (for context on what the skill does):
<skill_content>
{skill_content}
</skill_content>

Based on the failures, write a new and improved description that is more likely to trigger correctly. When I say "based on the failures", it's a bit of a tricky line to walk because we don't want to overfit to the specific cases you're seeing. So what I DON'T want you to do is produce an ever-expanding list of specific queries that this skill should or shouldn't trigger for. Instead, try to generalize from the failures to broader categories of user intent and situations where this skill would be useful or not useful. The reason for this is twofold:

1. Avoid overfitting
2. The list might get loooong and it's injected into ALL queries and there might be a lot of skills, so we don't want to blow too much space on any given description.

Concretely, your description should not be more than about 100-200 words, even if that comes at the cost of accuracy. There is a hard limit of 1024 characters — descriptions over that will be truncated, so stay comfortably under it.

Here are some tips that we've found to work well in writing these descriptions:
- The skill should be phrased in the imperative -- "Use this skill for" rather than "this skill does"
- The skill description should focus on the user's intent, what they are trying to achieve, vs. the implementation details of how the skill works.
- The description competes with other skills for Claude's attention — make it distinctive and immediately recognizable.
- If you're getting lots of failures after repeated attempts, change things up. Try different sentence structures or wordings.

I'd encourage you to be creative and mix up the style in different iterations since you'll have multiple opportunities to try different approaches and we'll just grab the highest-scoring one at the end.

Please respond with only the new description text in <new_description> tags, nothing else."""

    text = _call_claude(prompt, model)

    match = re.search(r"<new_description>(.*?)</new_description>", text, re.DOTALL)
    description = match.group(1).strip().strip('"') if match else text.strip().strip('"')

    transcript: dict = {
        "iteration": iteration,
        "prompt": prompt,
        "response": text,
        "parsed_description": description,
        "char_count": len(description),
        "over_limit": len(description) > 1024,
    }

    # Safety net: the prompt already states the 1024-char hard limit, but if
    # the model blew past it anyway, make one fresh single-turn call that
    # quotes the too-long version and asks for a shorter rewrite. (The old
    # SDK path did this as a true multi-turn; `claude -p` is one-shot, so we
    # inline the prior output into the new prompt instead.)
    if len(description) > 1024:
        shorten_prompt = (
            f"{prompt}\n\n"
            f"---\n\n"
            f"A previous attempt produced this description, which at "
            f"{len(description)} characters is over the 1024-character hard limit:\n\n"
            f'"{description}"\n\n'
            f"Rewrite it to be under 1024 characters while keeping the most "
            f"important trigger words and intent coverage. Respond with only "
            f"the new description in <new_description> tags."
        )
        shorten_text = _call_claude(shorten_prompt, model)
        match = re.search(r"<new_description>(.*?)</new_description>", shorten_text, re.DOTALL)
        shortened = match.group(1).strip().strip('"') if match else shorten_text.strip().strip('"')

        transcript["rewrite_prompt"] = shorten_prompt
        transcript["rewrite_response"] = shorten_text
        transcript["rewrite_description"] = shortened
        transcript["rewrite_char_count"] = len(shortened)
        description = shortened

    transcript["final_description"] = description

    if log_dir:
        log_dir.mkdir(parents=True, exist_ok=True)
        log_file = log_dir / f"improve_iter_{iteration or 'unknown'}.json"
        log_file.write_text(json.dumps(transcript, indent=2))

    return description


def main():
    parser = argparse.ArgumentParser(description="Improve a skill description based on eval results")
    parser.add_argument("--eval-results", required=True, help="Path to eval results JSON (from run_eval.py)")
    parser.add_argument("--skill-path", required=True, help="Path to skill directory")
    parser.add_argument("--history", default=None, help="Path to history JSON (previous attempts)")
    parser.add_argument("--model", required=True, help="Model for improvement")
    parser.add_argument("--verbose", action="store_true", help="Print thinking to stderr")
    args = parser.parse_args()

    skill_path = Path(args.skill_path)
    if not (skill_path / "SKILL.md").exists():
        print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
        sys.exit(1)

    eval_results = json.loads(Path(args.eval_results).read_text())
    history = []
    if args.history:
        history = json.loads(Path(args.history).read_text())

    name, _, content = parse_skill_md(skill_path)
    current_description = eval_results["description"]

    if args.verbose:
        print(f"Current: {current_description}", file=sys.stderr)
        print(f"Score: {eval_results['summary']['passed']}/{eval_results['summary']['total']}", file=sys.stderr)

    new_description = improve_description(
        skill_name=name,
        skill_content=content,
        current_description=current_description,
        eval_results=eval_results,
        history=history,
        model=args.model,
    )

    if args.verbose:
        print(f"Improved: {new_description}", file=sys.stderr)

    # Output as JSON with both the new description and updated history
    output = {
        "description": new_description,
        "history": history + [{
            "description": current_description,
            "passed": eval_results["summary"]["passed"],
            "failed": eval_results["summary"]["failed"],
            "total": eval_results["summary"]["total"],
            "results": eval_results["results"],
        }],
    }
    print(json.dumps(output, indent=2))


if __name__ == "__main__":
    main()
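The tag-extraction step in improve_description can be exercised on its own. A minimal sketch with a made-up model response (the surrounding prose and quotes are hypothetical; the regex and the fallback-to-raw-text behavior match the script above):

```python
import re

# Hypothetical model output wrapping the answer in tags, as the prompt requests.
text = 'Sure, here you go. <new_description>Use this skill for X.</new_description>'

match = re.search(r"<new_description>(.*?)</new_description>", text, re.DOTALL)
# If the tags are missing, fall back to the whole response, stripped of
# whitespace and any wrapping double quotes.
description = match.group(1).strip().strip('"') if match else text.strip().strip('"')
print(description)  # → Use this skill for X.
```

The non-greedy `(.*?)` plus `re.DOTALL` keeps the match to the first tag pair even if the description itself spans multiple lines.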
+136
@@ -0,0 +1,136 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder

Usage:
    python utils/package_skill.py <path/to/skill-folder> [output-directory]

Example:
    python utils/package_skill.py skills/public/my-skill
    python utils/package_skill.py skills/public/my-skill ./dist
"""

import fnmatch
import sys
import zipfile
from pathlib import Path
from scripts.quick_validate import validate_skill

# Patterns to exclude when packaging skills.
EXCLUDE_DIRS = {"__pycache__", "node_modules"}
EXCLUDE_GLOBS = {"*.pyc"}
EXCLUDE_FILES = {".DS_Store"}
# Directories excluded only at the skill root (not when nested deeper).
ROOT_EXCLUDE_DIRS = {"evals"}


def should_exclude(rel_path: Path) -> bool:
    """Check if a path should be excluded from packaging."""
    parts = rel_path.parts
    if any(part in EXCLUDE_DIRS for part in parts):
        return True
    # rel_path is relative to skill_path.parent, so parts[0] is the skill
    # folder name and parts[1] (if present) is the first subdir.
    if len(parts) > 1 and parts[1] in ROOT_EXCLUDE_DIRS:
        return True
    name = rel_path.name
    if name in EXCLUDE_FILES:
        return True
    return any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_GLOBS)


def package_skill(skill_path, output_dir=None):
    """
    Package a skill folder into a .skill file.

    Args:
        skill_path: Path to the skill folder
        output_dir: Optional output directory for the .skill file (defaults to current directory)

    Returns:
        Path to the created .skill file, or None if error
    """
    skill_path = Path(skill_path).resolve()

    # Validate skill folder exists
    if not skill_path.exists():
        print(f"❌ Error: Skill folder not found: {skill_path}")
        return None

    if not skill_path.is_dir():
        print(f"❌ Error: Path is not a directory: {skill_path}")
        return None

    # Validate SKILL.md exists
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        print(f"❌ Error: SKILL.md not found in {skill_path}")
        return None

    # Run validation before packaging
    print("🔍 Validating skill...")
    valid, message = validate_skill(skill_path)
    if not valid:
        print(f"❌ Validation failed: {message}")
        print("   Please fix the validation errors before packaging.")
        return None
    print(f"✅ {message}\n")

    # Determine output location
    skill_name = skill_path.name
    if output_dir:
        output_path = Path(output_dir).resolve()
        output_path.mkdir(parents=True, exist_ok=True)
    else:
        output_path = Path.cwd()

    skill_filename = output_path / f"{skill_name}.skill"

    # Create the .skill file (zip format)
    try:
        with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
            # Walk through the skill directory, excluding build artifacts
            for file_path in skill_path.rglob('*'):
                if not file_path.is_file():
                    continue
                arcname = file_path.relative_to(skill_path.parent)
                if should_exclude(arcname):
                    print(f"  Skipped: {arcname}")
                    continue
                zipf.write(file_path, arcname)
                print(f"  Added: {arcname}")

        print(f"\n✅ Successfully packaged skill to: {skill_filename}")
        return skill_filename

    except Exception as e:
        print(f"❌ Error creating .skill file: {e}")
        return None


def main():
    if len(sys.argv) < 2:
        print("Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]")
        print("\nExample:")
        print("  python utils/package_skill.py skills/public/my-skill")
        print("  python utils/package_skill.py skills/public/my-skill ./dist")
        sys.exit(1)

    skill_path = sys.argv[1]
    output_dir = sys.argv[2] if len(sys.argv) > 2 else None

    print(f"📦 Packaging skill: {skill_path}")
    if output_dir:
        print(f"   Output directory: {output_dir}")
    print()

    result = package_skill(skill_path, output_dir)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
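The exclusion rules in the packager are worth seeing in action, in particular that `evals/` is dropped only at the skill root because archive paths are relative to the skill folder's parent. A self-contained sketch (the rules are copied from the packager; the paths are hypothetical):

```python
import fnmatch
from pathlib import Path

EXCLUDE_DIRS = {"__pycache__", "node_modules"}
EXCLUDE_GLOBS = {"*.pyc"}
ROOT_EXCLUDE_DIRS = {"evals"}

def should_exclude(rel_path: Path) -> bool:
    parts = rel_path.parts
    # Excluded anywhere in the tree.
    if any(part in EXCLUDE_DIRS for part in parts):
        return True
    # parts[0] is the skill folder name, so parts[1] is the root-level subdir.
    if len(parts) > 1 and parts[1] in ROOT_EXCLUDE_DIRS:
        return True
    return any(fnmatch.fnmatch(rel_path.name, pat) for pat in EXCLUDE_GLOBS)

print(should_exclude(Path("my-skill/evals/data.json")))      # → True (root-level evals)
print(should_exclude(Path("my-skill/docs/evals/notes.md")))  # → False (nested evals kept)
print(should_exclude(Path("my-skill/lib/__pycache__/m.pyc")))  # → True
```

Keeping `evals` out of `EXCLUDE_DIRS` means a skill can still ship a nested `docs/evals/` directory; only the top-level eval harness is stripped from the package.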
+103
@@ -0,0 +1,103 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""

import sys
import re
import yaml
from pathlib import Path


def validate_skill(skill_path):
    """Basic validation of a skill"""
    skill_path = Path(skill_path)

    # Check SKILL.md exists
    skill_md = skill_path / 'SKILL.md'
    if not skill_md.exists():
        return False, "SKILL.md not found"

    # Read and validate frontmatter
    content = skill_md.read_text()
    if not content.startswith('---'):
        return False, "No YAML frontmatter found"

    # Extract frontmatter
    match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
    if not match:
        return False, "Invalid frontmatter format"

    frontmatter_text = match.group(1)

    # Parse YAML frontmatter
    try:
        frontmatter = yaml.safe_load(frontmatter_text)
        if not isinstance(frontmatter, dict):
            return False, "Frontmatter must be a YAML dictionary"
    except yaml.YAMLError as e:
        return False, f"Invalid YAML in frontmatter: {e}"

    # Define allowed properties
    ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata', 'compatibility'}

    # Check for unexpected properties (excluding nested keys under metadata)
    unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
    if unexpected_keys:
        return False, (
            f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
            f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
        )

    # Check required fields
    if 'name' not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if 'description' not in frontmatter:
        return False, "Missing 'description' in frontmatter"

    # Extract name for validation
    name = frontmatter.get('name', '')
    if not isinstance(name, str):
        return False, f"Name must be a string, got {type(name).__name__}"
    name = name.strip()
    if name:
        # Check naming convention (kebab-case: lowercase with hyphens)
        if not re.match(r'^[a-z0-9-]+$', name):
            return False, f"Name '{name}' should be kebab-case (lowercase letters, digits, and hyphens only)"
        if name.startswith('-') or name.endswith('-') or '--' in name:
            return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
        # Check name length (max 64 characters per spec)
        if len(name) > 64:
            return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."

    # Extract and validate description
    description = frontmatter.get('description', '')
    if not isinstance(description, str):
        return False, f"Description must be a string, got {type(description).__name__}"
    description = description.strip()
    if description:
        # Check for angle brackets
        if '<' in description or '>' in description:
            return False, "Description cannot contain angle brackets (< or >)"
        # Check description length (max 1024 characters per spec)
        if len(description) > 1024:
            return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."

    # Validate compatibility field if present (optional)
    compatibility = frontmatter.get('compatibility', '')
    if compatibility:
        if not isinstance(compatibility, str):
            return False, f"Compatibility must be a string, got {type(compatibility).__name__}"
        if len(compatibility) > 500:
            return False, f"Compatibility is too long ({len(compatibility)} characters). Maximum is 500 characters."

    return True, "Skill is valid!"


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python quick_validate.py <skill_directory>")
        sys.exit(1)

    valid, message = validate_skill(sys.argv[1])
    print(message)
    sys.exit(0 if valid else 1)
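The name checks in the validator compose three rules: a kebab-case character class, no leading/trailing/consecutive hyphens, and a 64-character cap. A standalone sketch of just that rule set (helper name `name_ok` is mine, not the validator's):

```python
import re

def name_ok(name: str) -> bool:
    # kebab-case: lowercase letters, digits, and hyphens only
    if not re.match(r'^[a-z0-9-]+$', name):
        return False
    # no leading/trailing hyphen, no consecutive hyphens
    if name.startswith('-') or name.endswith('-') or '--' in name:
        return False
    # max 64 characters per spec
    return len(name) <= 64

print(name_ok("pdf-tools"))   # → True
print(name_ok("PDF_Tools"))   # → False (uppercase and underscore)
print(name_ok("a--b"))        # → False (consecutive hyphens)
print(name_ok("-lead"))       # → False (leading hyphen)
```

The two-step check is needed because the character class alone would accept `-lead` and `a--b`; hyphen placement has to be tested separately.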
+310
@@ -0,0 +1,310 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Run trigger evaluation for a skill description.
|
||||
|
||||
Tests whether a skill's description causes Claude to trigger (read the skill)
|
||||
for a set of queries. Outputs results as JSON.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import os
|
||||
import select
|
||||
import subprocess
|
||||
import sys
|
||||
import time
|
||||
import uuid
|
||||
from concurrent.futures import ProcessPoolExecutor, as_completed
|
||||
from pathlib import Path
|
||||
|
||||
from scripts.utils import parse_skill_md
|
||||
|
||||
|
||||
def find_project_root() -> Path:
|
||||
"""Find the project root by walking up from cwd looking for .claude/.
|
||||
|
||||
Mimics how Claude Code discovers its project root, so the command file
|
||||
we create ends up where claude -p will look for it.
|
||||
"""
|
||||
current = Path.cwd()
|
||||
for parent in [current, *current.parents]:
|
||||
if (parent / ".claude").is_dir():
|
||||
return parent
|
||||
return current
|
||||
|
||||
|
||||
def run_single_query(
|
||||
query: str,
|
||||
skill_name: str,
|
||||
skill_description: str,
|
||||
timeout: int,
|
||||
project_root: str,
|
||||
model: str | None = None,
|
||||
) -> bool:
|
||||
"""Run a single query and return whether the skill was triggered.
|
||||
|
||||
Creates a command file in .claude/commands/ so it appears in Claude's
|
||||
available_skills list, then runs `claude -p` with the raw query.
|
||||
Uses --include-partial-messages to detect triggering early from
|
||||
stream events (content_block_start) rather than waiting for the
|
||||
full assistant message, which only arrives after tool execution.
|
||||
"""
|
||||
unique_id = uuid.uuid4().hex[:8]
|
||||
clean_name = f"{skill_name}-skill-{unique_id}"
|
||||
project_commands_dir = Path(project_root) / ".claude" / "commands"
|
||||
command_file = project_commands_dir / f"{clean_name}.md"
|
||||
|
||||
try:
|
||||
project_commands_dir.mkdir(parents=True, exist_ok=True)
|
||||
# Use YAML block scalar to avoid breaking on quotes in description
|
||||
indented_desc = "\n ".join(skill_description.split("\n"))
|
||||
command_content = (
|
||||
f"---\n"
|
||||
f"description: |\n"
|
||||
f" {indented_desc}\n"
|
||||
f"---\n\n"
|
||||
f"# {skill_name}\n\n"
|
||||
f"This skill handles: {skill_description}\n"
|
||||
)
|
||||
command_file.write_text(command_content)
|
||||
|
||||
cmd = [
|
||||
"claude",
|
||||
"-p", query,
|
||||
"--output-format", "stream-json",
|
||||
"--verbose",
|
||||
"--include-partial-messages",
|
||||
]
|
||||
if model:
|
||||
cmd.extend(["--model", model])
|
||||
|
||||
# Remove CLAUDECODE env var to allow nesting claude -p inside a
|
||||
# Claude Code session. The guard is for interactive terminal conflicts;
|
||||
# programmatic subprocess usage is safe.
|
||||
env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}
|
||||
|
||||
process = subprocess.Popen(
|
||||
cmd,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=subprocess.DEVNULL,
|
||||
cwd=project_root,
|
||||
env=env,
|
||||
)
|
||||
|
||||
triggered = False
|
||||
start_time = time.time()
|
||||
buffer = ""
|
||||
# Track state for stream event detection
|
||||
pending_tool_name = None
|
||||
accumulated_json = ""
|
||||
|
||||
try:
|
||||
while time.time() - start_time < timeout:
|
||||
if process.poll() is not None:
|
||||
remaining = process.stdout.read()
|
||||
if remaining:
|
||||
buffer += remaining.decode("utf-8", errors="replace")
|
||||
break
|
||||
|
||||
ready, _, _ = select.select([process.stdout], [], [], 1.0)
|
||||
if not ready:
|
||||
continue
|
||||
|
||||
chunk = os.read(process.stdout.fileno(), 8192)
|
||||
if not chunk:
|
||||
break
|
||||
buffer += chunk.decode("utf-8", errors="replace")
|
||||
|
||||
while "\n" in buffer:
|
||||
line, buffer = buffer.split("\n", 1)
|
||||
line = line.strip()
|
||||
if not line:
|
||||
continue
|
||||
|
||||
try:
|
||||
event = json.loads(line)
|
||||
except json.JSONDecodeError:
|
||||
continue
|
||||
|
||||
# Early detection via stream events
|
||||
if event.get("type") == "stream_event":
|
||||
se = event.get("event", {})
|
||||
se_type = se.get("type", "")
|
||||
|
||||
if se_type == "content_block_start":
|
||||
cb = se.get("content_block", {})
|
||||
if cb.get("type") == "tool_use":
|
||||
tool_name = cb.get("name", "")
|
||||
if tool_name in ("Skill", "Read"):
|
||||
pending_tool_name = tool_name
|
||||
accumulated_json = ""
|
||||
else:
|
||||
return False
|
||||
|
||||
elif se_type == "content_block_delta" and pending_tool_name:
|
||||
delta = se.get("delta", {})
|
||||
if delta.get("type") == "input_json_delta":
|
||||
accumulated_json += delta.get("partial_json", "")
|
||||
if clean_name in accumulated_json:
|
||||
return True
|
||||
|
||||
elif se_type in ("content_block_stop", "message_stop"):
|
||||
if pending_tool_name:
|
||||
return clean_name in accumulated_json
|
||||
if se_type == "message_stop":
|
||||
return False
|
||||
|
||||
# Fallback: full assistant message
|
||||
elif event.get("type") == "assistant":
|
||||
message = event.get("message", {})
|
||||
for content_item in message.get("content", []):
|
||||
if content_item.get("type") != "tool_use":
|
||||
continue
|
||||
tool_name = content_item.get("name", "")
|
||||
tool_input = content_item.get("input", {})
|
||||
if tool_name == "Skill" and clean_name in tool_input.get("skill", ""):
|
||||
triggered = True
|
||||
elif tool_name == "Read" and clean_name in tool_input.get("file_path", ""):
|
||||
triggered = True
|
||||
return triggered
|
||||
|
||||
elif event.get("type") == "result":
|
||||
return triggered
|
||||
finally:
|
||||
# Clean up process on any exit path (return, exception, timeout)
|
||||
        if process.poll() is None:
            process.kill()
            process.wait()

        return triggered
    finally:
        if command_file.exists():
            command_file.unlink()


def run_eval(
    eval_set: list[dict],
    skill_name: str,
    description: str,
    num_workers: int,
    timeout: int,
    project_root: Path,
    runs_per_query: int = 1,
    trigger_threshold: float = 0.5,
    model: str | None = None,
) -> dict:
    """Run the full eval set and return results."""
    results = []

    with ProcessPoolExecutor(max_workers=num_workers) as executor:
        future_to_info = {}
        for item in eval_set:
            for run_idx in range(runs_per_query):
                future = executor.submit(
                    run_single_query,
                    item["query"],
                    skill_name,
                    description,
                    timeout,
                    str(project_root),
                    model,
                )
                future_to_info[future] = (item, run_idx)

        query_triggers: dict[str, list[bool]] = {}
        query_items: dict[str, dict] = {}
        for future in as_completed(future_to_info):
            item, _ = future_to_info[future]
            query = item["query"]
            query_items[query] = item
            if query not in query_triggers:
                query_triggers[query] = []
            try:
                query_triggers[query].append(future.result())
            except Exception as e:
                print(f"Warning: query failed: {e}", file=sys.stderr)
                query_triggers[query].append(False)

    for query, triggers in query_triggers.items():
        item = query_items[query]
        trigger_rate = sum(triggers) / len(triggers)
        should_trigger = item["should_trigger"]
        if should_trigger:
            did_pass = trigger_rate >= trigger_threshold
        else:
            did_pass = trigger_rate < trigger_threshold
        results.append({
            "query": query,
            "should_trigger": should_trigger,
            "trigger_rate": trigger_rate,
            "triggers": sum(triggers),
            "runs": len(triggers),
            "pass": did_pass,
        })

    passed = sum(1 for r in results if r["pass"])
    total = len(results)

    return {
        "skill_name": skill_name,
        "description": description,
        "results": results,
        "summary": {
            "total": total,
            "passed": passed,
            "failed": total - passed,
        },
    }
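The pass/fail rule used by `run_eval` can be isolated into a small helper to make the asymmetry explicit: positive queries must trigger at or above the threshold rate, while negative queries must stay strictly below it. A minimal sketch (the function name here is illustrative, not part of the script):

```python
def did_pass(triggers: list[bool], should_trigger: bool, threshold: float = 0.5) -> bool:
    """Same criterion as run_eval: compare the observed trigger rate
    against the threshold, with the direction set by should_trigger."""
    rate = sum(triggers) / len(triggers)
    return rate >= threshold if should_trigger else rate < threshold

print(did_pass([True, True, False], should_trigger=True))    # True (2/3 >= 0.5)
print(did_pass([True, False, False], should_trigger=False))  # True (1/3 < 0.5)
```

Note that a query expected to trigger passes at exactly the threshold, while one expected not to trigger fails at exactly the threshold — a tie counts as triggering.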


def main():
    parser = argparse.ArgumentParser(description="Run trigger evaluation for a skill description")
    parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
    parser.add_argument("--skill-path", required=True, help="Path to skill directory")
    parser.add_argument("--description", default=None, help="Override description to test")
    parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
    parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
    parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
    parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
    parser.add_argument("--model", default=None, help="Model to use for claude -p (default: user's configured model)")
    parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
    args = parser.parse_args()

    eval_set = json.loads(Path(args.eval_set).read_text())
    skill_path = Path(args.skill_path)

    if not (skill_path / "SKILL.md").exists():
        print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
        sys.exit(1)

    name, original_description, content = parse_skill_md(skill_path)
    description = args.description or original_description
    project_root = find_project_root()

    if args.verbose:
        print(f"Evaluating: {description}", file=sys.stderr)

    output = run_eval(
        eval_set=eval_set,
        skill_name=name,
        description=description,
        num_workers=args.num_workers,
        timeout=args.timeout,
        project_root=project_root,
        runs_per_query=args.runs_per_query,
        trigger_threshold=args.trigger_threshold,
        model=args.model,
    )

    if args.verbose:
        summary = output["summary"]
        print(f"Results: {summary['passed']}/{summary['total']} passed", file=sys.stderr)
        for r in output["results"]:
            status = "PASS" if r["pass"] else "FAIL"
            rate_str = f"{r['triggers']}/{r['runs']}"
            print(f"  [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:70]}", file=sys.stderr)

    print(json.dumps(output, indent=2))


if __name__ == "__main__":
    main()
@@ -0,0 +1,328 @@
#!/usr/bin/env python3
"""Run the eval + improve loop until all pass or max iterations reached.

Combines run_eval.py and improve_description.py in a loop, tracking history
and returning the best description found. Supports train/test split to prevent
overfitting.
"""

import argparse
import json
import random
import sys
import tempfile
import time
import webbrowser
from pathlib import Path

from scripts.generate_report import generate_html
from scripts.improve_description import improve_description
from scripts.run_eval import find_project_root, run_eval
from scripts.utils import parse_skill_md


def split_eval_set(eval_set: list[dict], holdout: float, seed: int = 42) -> tuple[list[dict], list[dict]]:
    """Split eval set into train and test sets, stratified by should_trigger."""
    random.seed(seed)

    # Separate by should_trigger
    trigger = [e for e in eval_set if e["should_trigger"]]
    no_trigger = [e for e in eval_set if not e["should_trigger"]]

    # Shuffle each group
    random.shuffle(trigger)
    random.shuffle(no_trigger)

    # Calculate split points
    n_trigger_test = max(1, int(len(trigger) * holdout))
    n_no_trigger_test = max(1, int(len(no_trigger) * holdout))

    # Split
    test_set = trigger[:n_trigger_test] + no_trigger[:n_no_trigger_test]
    train_set = trigger[n_trigger_test:] + no_trigger[n_no_trigger_test:]

    return train_set, test_set
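A quick check that the stratified split keeps both classes represented in both halves. The function body below reproduces the logic of `split_eval_set` so the sketch runs standalone; the toy eval items are made up for illustration:

```python
import random

def split_eval_set(eval_set, holdout, seed=42):
    # Same stratified-split logic as above, reproduced for a self-contained demo.
    random.seed(seed)
    trigger = [e for e in eval_set if e["should_trigger"]]
    no_trigger = [e for e in eval_set if not e["should_trigger"]]
    random.shuffle(trigger)
    random.shuffle(no_trigger)
    n_t = max(1, int(len(trigger) * holdout))
    n_n = max(1, int(len(no_trigger) * holdout))
    return trigger[n_t:] + no_trigger[n_n:], trigger[:n_t] + no_trigger[:n_n]

# 10 items, alternating positive/negative, split 60/40.
evals = [{"query": f"q{i}", "should_trigger": i % 2 == 0} for i in range(10)]
train, test = split_eval_set(evals, holdout=0.4)
print(len(train), len(test))  # 6 4
```

Because each class is sliced separately, the test set gets two positives and two negatives here — the stratification guarantees at least one of each class in the holdout, even for small eval sets, thanks to the `max(1, ...)` floor.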


def run_loop(
    eval_set: list[dict],
    skill_path: Path,
    description_override: str | None,
    num_workers: int,
    timeout: int,
    max_iterations: int,
    runs_per_query: int,
    trigger_threshold: float,
    holdout: float,
    model: str,
    verbose: bool,
    live_report_path: Path | None = None,
    log_dir: Path | None = None,
) -> dict:
    """Run the eval + improvement loop."""
    project_root = find_project_root()
    name, original_description, content = parse_skill_md(skill_path)
    current_description = description_override or original_description

    # Split into train/test if holdout > 0
    if holdout > 0:
        train_set, test_set = split_eval_set(eval_set, holdout)
        if verbose:
            print(f"Split: {len(train_set)} train, {len(test_set)} test (holdout={holdout})", file=sys.stderr)
    else:
        train_set = eval_set
        test_set = []

    history = []
    exit_reason = "unknown"

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"\n{'='*60}", file=sys.stderr)
            print(f"Iteration {iteration}/{max_iterations}", file=sys.stderr)
            print(f"Description: {current_description}", file=sys.stderr)
            print(f"{'='*60}", file=sys.stderr)

        # Evaluate train + test together in one batch for parallelism
        all_queries = train_set + test_set
        t0 = time.time()
        all_results = run_eval(
            eval_set=all_queries,
            skill_name=name,
            description=current_description,
            num_workers=num_workers,
            timeout=timeout,
            project_root=project_root,
            runs_per_query=runs_per_query,
            trigger_threshold=trigger_threshold,
            model=model,
        )
        eval_elapsed = time.time() - t0

        # Split results back into train/test by matching queries
        train_queries_set = {q["query"] for q in train_set}
        train_result_list = [r for r in all_results["results"] if r["query"] in train_queries_set]
        test_result_list = [r for r in all_results["results"] if r["query"] not in train_queries_set]

        train_passed = sum(1 for r in train_result_list if r["pass"])
        train_total = len(train_result_list)
        train_summary = {"passed": train_passed, "failed": train_total - train_passed, "total": train_total}
        train_results = {"results": train_result_list, "summary": train_summary}

        if test_set:
            test_passed = sum(1 for r in test_result_list if r["pass"])
            test_total = len(test_result_list)
            test_summary = {"passed": test_passed, "failed": test_total - test_passed, "total": test_total}
            test_results = {"results": test_result_list, "summary": test_summary}
        else:
            test_results = None
            test_summary = None

        history.append({
            "iteration": iteration,
            "description": current_description,
            "train_passed": train_summary["passed"],
            "train_failed": train_summary["failed"],
            "train_total": train_summary["total"],
            "train_results": train_results["results"],
            "test_passed": test_summary["passed"] if test_summary else None,
            "test_failed": test_summary["failed"] if test_summary else None,
            "test_total": test_summary["total"] if test_summary else None,
            "test_results": test_results["results"] if test_results else None,
            # For backward compat with report generator
            "passed": train_summary["passed"],
            "failed": train_summary["failed"],
            "total": train_summary["total"],
            "results": train_results["results"],
        })

        # Write live report if path provided
        if live_report_path:
            partial_output = {
                "original_description": original_description,
                "best_description": current_description,
                "best_score": "in progress",
                "iterations_run": len(history),
                "holdout": holdout,
                "train_size": len(train_set),
                "test_size": len(test_set),
                "history": history,
            }
            live_report_path.write_text(generate_html(partial_output, auto_refresh=True, skill_name=name))

        if verbose:
            def print_eval_stats(label, results, elapsed):
                pos = [r for r in results if r["should_trigger"]]
                neg = [r for r in results if not r["should_trigger"]]
                tp = sum(r["triggers"] for r in pos)
                pos_runs = sum(r["runs"] for r in pos)
                fn = pos_runs - tp
                fp = sum(r["triggers"] for r in neg)
                neg_runs = sum(r["runs"] for r in neg)
                tn = neg_runs - fp
                total = tp + tn + fp + fn
                precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
                recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0
                accuracy = (tp + tn) / total if total > 0 else 0.0
                print(f"{label}: {tp+tn}/{total} correct, precision={precision:.0%} recall={recall:.0%} accuracy={accuracy:.0%} ({elapsed:.1f}s)", file=sys.stderr)
                for r in results:
                    status = "PASS" if r["pass"] else "FAIL"
                    rate_str = f"{r['triggers']}/{r['runs']}"
                    print(f"  [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:60]}", file=sys.stderr)

            print_eval_stats("Train", train_results["results"], eval_elapsed)
            if test_summary:
                print_eval_stats("Test ", test_results["results"], 0)

        if train_summary["failed"] == 0:
            exit_reason = f"all_passed (iteration {iteration})"
            if verbose:
                print(f"\nAll train queries passed on iteration {iteration}!", file=sys.stderr)
            break

        if iteration == max_iterations:
            exit_reason = f"max_iterations ({max_iterations})"
            if verbose:
                print(f"\nMax iterations reached ({max_iterations}).", file=sys.stderr)
            break

        # Improve the description based on train results
        if verbose:
            print("\nImproving description...", file=sys.stderr)

        t0 = time.time()
        # Strip test scores from history so improvement model can't see them
        blinded_history = [
            {k: v for k, v in h.items() if not k.startswith("test_")}
            for h in history
        ]
        new_description = improve_description(
            skill_name=name,
            skill_content=content,
            current_description=current_description,
            eval_results=train_results,
            history=blinded_history,
            model=model,
            log_dir=log_dir,
            iteration=iteration,
        )
        improve_elapsed = time.time() - t0

        if verbose:
            print(f"Proposed ({improve_elapsed:.1f}s): {new_description}", file=sys.stderr)

        current_description = new_description

    # Find the best iteration by TEST score (or train if no test set)
    if test_set:
        best = max(history, key=lambda h: h["test_passed"] or 0)
        best_score = f"{best['test_passed']}/{best['test_total']}"
    else:
        best = max(history, key=lambda h: h["train_passed"])
        best_score = f"{best['train_passed']}/{best['train_total']}"

    if verbose:
        print(f"\nExit reason: {exit_reason}", file=sys.stderr)
        print(f"Best score: {best_score} (iteration {best['iteration']})", file=sys.stderr)

    return {
        "exit_reason": exit_reason,
        "original_description": original_description,
        "best_description": best["description"],
        "best_score": best_score,
        "best_train_score": f"{best['train_passed']}/{best['train_total']}",
        "best_test_score": f"{best['test_passed']}/{best['test_total']}" if test_set else None,
        "final_description": current_description,
        "iterations_run": len(history),
        "holdout": holdout,
        "train_size": len(train_set),
        "test_size": len(test_set),
        "history": history,
    }
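The best-iteration selection in `run_loop` reduces to a single `max` over the history list; the `or 0` coalesces a `None` test score (from an iteration whose test queries all errored out) to zero so it can never win. A minimal illustration with made-up scores:

```python
history = [
    {"iteration": 1, "test_passed": 5, "test_total": 8},
    {"iteration": 2, "test_passed": 7, "test_total": 8},
    {"iteration": 3, "test_passed": None, "test_total": 8},  # e.g. an errored run
]

# `or 0` turns None into 0, so the errored iteration can't be selected.
best = max(history, key=lambda h: h["test_passed"] or 0)
print(best["iteration"])  # 2
```

Ties go to the earliest iteration, since `max` keeps the first of equal keys — a reasonable default, as earlier descriptions are closer to the original.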


def main():
    parser = argparse.ArgumentParser(description="Run eval + improve loop")
    parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
    parser.add_argument("--skill-path", required=True, help="Path to skill directory")
    parser.add_argument("--description", default=None, help="Override starting description")
    parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
    parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
    parser.add_argument("--max-iterations", type=int, default=5, help="Max improvement iterations")
    parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
    parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
    parser.add_argument("--holdout", type=float, default=0.4, help="Fraction of eval set to hold out for testing (0 to disable)")
    parser.add_argument("--model", required=True, help="Model for improvement")
    parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
    parser.add_argument("--report", default="auto", help="Generate HTML report at this path (default: 'auto' for temp file, 'none' to disable)")
    parser.add_argument("--results-dir", default=None, help="Save all outputs (results.json, report.html, log.txt) to a timestamped subdirectory here")
    args = parser.parse_args()

    eval_set = json.loads(Path(args.eval_set).read_text())
    skill_path = Path(args.skill_path)

    if not (skill_path / "SKILL.md").exists():
        print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
        sys.exit(1)

    name, _, _ = parse_skill_md(skill_path)

    # Set up live report path
    if args.report != "none":
        if args.report == "auto":
            timestamp = time.strftime("%Y%m%d_%H%M%S")
            live_report_path = Path(tempfile.gettempdir()) / f"skill_description_report_{skill_path.name}_{timestamp}.html"
        else:
            live_report_path = Path(args.report)
        # Open the report immediately so the user can watch
        live_report_path.write_text("<html><body><h1>Starting optimization loop...</h1><meta http-equiv='refresh' content='5'></body></html>")
        webbrowser.open(str(live_report_path))
    else:
        live_report_path = None

    # Determine output directory (create before run_loop so logs can be written)
    if args.results_dir:
        timestamp = time.strftime("%Y-%m-%d_%H%M%S")
        results_dir = Path(args.results_dir) / timestamp
        results_dir.mkdir(parents=True, exist_ok=True)
    else:
        results_dir = None

    log_dir = results_dir / "logs" if results_dir else None

    output = run_loop(
        eval_set=eval_set,
        skill_path=skill_path,
        description_override=args.description,
        num_workers=args.num_workers,
        timeout=args.timeout,
        max_iterations=args.max_iterations,
        runs_per_query=args.runs_per_query,
        trigger_threshold=args.trigger_threshold,
        holdout=args.holdout,
        model=args.model,
        verbose=args.verbose,
        live_report_path=live_report_path,
        log_dir=log_dir,
    )

    # Save JSON output
    json_output = json.dumps(output, indent=2)
    print(json_output)
    if results_dir:
        (results_dir / "results.json").write_text(json_output)

    # Write final HTML report (without auto-refresh)
    if live_report_path:
        live_report_path.write_text(generate_html(output, auto_refresh=False, skill_name=name))
        print(f"\nReport: {live_report_path}", file=sys.stderr)

    if results_dir and live_report_path:
        (results_dir / "report.html").write_text(generate_html(output, auto_refresh=False, skill_name=name))

    if results_dir:
        print(f"Results saved to: {results_dir}", file=sys.stderr)


if __name__ == "__main__":
    main()
@@ -0,0 +1,47 @@
"""Shared utilities for skill-creator scripts."""

from pathlib import Path


def parse_skill_md(skill_path: Path) -> tuple[str, str, str]:
    """Parse a SKILL.md file, returning (name, description, full_content)."""
    content = (skill_path / "SKILL.md").read_text()
    lines = content.split("\n")

    if lines[0].strip() != "---":
        raise ValueError("SKILL.md missing frontmatter (no opening ---)")

    end_idx = None
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            end_idx = i
            break

    if end_idx is None:
        raise ValueError("SKILL.md missing frontmatter (no closing ---)")

    name = ""
    description = ""
    frontmatter_lines = lines[1:end_idx]
    i = 0
    while i < len(frontmatter_lines):
        line = frontmatter_lines[i]
        if line.startswith("name:"):
            name = line[len("name:"):].strip().strip('"').strip("'")
        elif line.startswith("description:"):
            value = line[len("description:"):].strip()
            # Handle YAML multiline indicators (>, |, >-, |-)
            if value in (">", "|", ">-", "|-"):
                continuation_lines: list[str] = []
                i += 1
                while i < len(frontmatter_lines) and (frontmatter_lines[i].startswith(" ") or frontmatter_lines[i].startswith("\t")):
                    continuation_lines.append(frontmatter_lines[i].strip())
                    i += 1
                description = " ".join(continuation_lines)
                continue
            else:
                description = value.strip('"').strip("'")
        i += 1

    return name, description, content
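The trickiest part of this parser is the folded-scalar branch: when the description value is a bare `>` or `|` indicator, the loop consumes indented continuation lines and joins them with spaces. A simplified, self-contained sketch of that frontmatter walk (operating on a string rather than a file, so it can be exercised directly):

```python
def parse_frontmatter(text: str) -> tuple[str, str]:
    """Simplified sketch of the frontmatter walk in parse_skill_md above:
    single-line name, optionally folded description."""
    lines = text.split("\n")
    assert lines[0].strip() == "---"
    end = next(i for i, l in enumerate(lines[1:], start=1) if l.strip() == "---")
    fm = lines[1:end]
    name = description = ""
    i = 0
    while i < len(fm):
        line = fm[i]
        if line.startswith("name:"):
            name = line[len("name:"):].strip().strip('"').strip("'")
        elif line.startswith("description:"):
            value = line[len("description:"):].strip()
            if value in (">", "|", ">-", "|-"):
                # Consume indented continuation lines, join with spaces.
                parts = []
                i += 1
                while i < len(fm) and fm[i][:1] in (" ", "\t"):
                    parts.append(fm[i].strip())
                    i += 1
                description = " ".join(parts)
                continue  # i already points past the folded block
            description = value.strip('"').strip("'")
        i += 1
    return name, description

text = "---\nname: demo\ndescription: >\n  spans two\n  folded lines\n---\nbody"
print(parse_frontmatter(text))  # ('demo', 'spans two folded lines')
```

Note this joins with spaces regardless of indicator, so literal (`|`) blocks lose their newlines — fine for one-line skill descriptions, but not a general YAML parser.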
@@ -0,0 +1,285 @@
|
||||
---
|
||||
name: zeroclaw
|
||||
description: "Help users operate and interact with their ZeroClaw agent instance — through both the CLI (`zeroclaw` commands) and the REST/WebSocket gateway API. Use this skill whenever the user wants to: send messages to ZeroClaw, manage memory or cron jobs, check system status, configure channels or providers, hit the gateway API, troubleshoot their ZeroClaw setup, build from source, or do anything involving the `zeroclaw` binary or its HTTP endpoints. Trigger this even if the user just says things like 'check my agent status', 'schedule a reminder', 'store this in memory', 'list my cron jobs', 'send a message to my bot', 'set up Telegram', 'build zeroclaw', or 'my bot is broken' — these are all ZeroClaw operations."
|
||||
---
|
||||
|
||||
# ZeroClaw Skill
|
||||
|
||||
You are helping a user operate their ZeroClaw agent instance. ZeroClaw is an autonomous agent runtime with a CLI and an HTTP/WebSocket gateway.
|
||||
|
||||
Your job is to understand what the user wants to accomplish and then **execute it** — run the command, make the API call, report the result. Do not just show commands for the user to copy-paste. Actually run them via the Bash tool and tell the user what happened. The only exception is destructive operations (clearing all memory, estop kill-all) where you should confirm first.
|
||||
|
||||
## Adaptive Expertise
|
||||
|
||||
Pay attention to how the user talks. Someone who says "can you hit the webhook endpoint with a POST" is telling you they know what they're doing — be concise, skip explanations, just execute. Someone who says "how do I make my bot remember things" needs more context about what's happening under the hood.
|
||||
|
||||
Signals of technical comfort: mentions specific endpoints, HTTP methods, JSON fields, talks about tokens/auth, uses CLI flags fluently, references config files directly.
|
||||
|
||||
Signals of less familiarity: asks "what does X do", uses casual language about the bot/agent, describes goals rather than mechanisms ("I want it to check something every morning").
|
||||
|
||||
Default to a middle ground — brief explanation of what you're about to do, then do it. Dial up or down from there based on cues.
|
||||
|
||||
## Discovery — Before You Act
|
||||
|
||||
Before running any ZeroClaw operation, make sure you know where things are:
|
||||
|
||||
1. **Find the binary.** Search in this order:
|
||||
- `which zeroclaw` (PATH)
|
||||
- The current project's build output: `./target/release/zeroclaw` or `./target/debug/zeroclaw` — this is the right choice when the user is working inside the ZeroClaw source tree and may have local changes
|
||||
- Common install locations: `~/.cargo/bin/zeroclaw`, `~/Downloads/zeroclaw-bin/zeroclaw`
|
||||
|
||||
If no binary is found anywhere, offer to build from source (see "Building from Source" below). If the user is a developer working on ZeroClaw itself, they'll likely want the local build — watch for cues like them editing source files, mentioning PRs, or being in the project directory.
|
||||
|
||||
2. **Check if the gateway is running** (only needed for REST/WebSocket operations). A quick `curl -sf http://127.0.0.1:42617/health` tells you. If it's not running and the user wants REST access, let them know and offer to start it (`zeroclaw gateway` or `zeroclaw daemon`).
|
||||
|
||||
3. **Check auth status.** If the gateway requires pairing (`require_pairing = true` is the default), REST calls need a bearer token. Run `zeroclaw status` to see the current state, or check `~/.zeroclaw/config.toml` for a stored token under `[gateway]`.
|
||||
|
||||
Cache these findings for the conversation — don't re-discover every time.
|
||||
|
||||
## Important: REPL Limitation
|
||||
|
||||
`zeroclaw agent` (interactive REPL) requires interactive stdin, which doesn't work through the Bash tool. When the user wants to chat with their agent, use single-message mode instead:
|
||||
|
||||
```bash
|
||||
zeroclaw agent -m "the message"
|
||||
```
|
||||
|
||||
Each `-m` invocation is independent (no conversation history between calls). If the user needs multi-turn conversation, let them know they can run `zeroclaw agent` directly in their terminal, or use the WebSocket endpoint for programmatic streaming.
|
||||
|
||||
## First-Time Setup
|
||||
|
||||
If the user hasn't set up ZeroClaw yet (no `~/.zeroclaw/config.toml` exists), guide them through onboarding:
|
||||
|
||||
```bash
|
||||
zeroclaw onboard # Quick mode — defaults to OpenRouter
|
||||
zeroclaw onboard --provider anthropic # Use Anthropic directly
|
||||
zeroclaw onboard --interactive # Step-by-step wizard
|
||||
```
|
||||
|
||||
After onboarding, verify everything works:
|
||||
```bash
|
||||
zeroclaw status
|
||||
zeroclaw doctor
|
||||
```
|
||||
|
||||
If they already have a config but something is broken, `zeroclaw onboard --channels-only` repairs just the channel configuration without overwriting everything else.
|
||||
|
||||
## Building from Source
|
||||
|
||||
If the user wants to build ZeroClaw (or no binary is installed):
|
||||
|
||||
```bash
|
||||
cargo build --release
|
||||
```
|
||||
|
||||
This produces `target/release/zeroclaw`. For faster iteration during development, `cargo build` (debug mode) is quicker but produces a slower binary at `target/debug/zeroclaw`.
|
||||
|
||||
You can also run directly without a separate build step:
|
||||
```bash
|
||||
cargo run --release -- <subcommand> [args]
|
||||
```
|
||||
|
||||
Before building, `cargo check` gives a quick compile validation without the full build.
|
||||
|
||||
## Choosing CLI vs REST
|
||||
|
||||
Both surfaces can do most things. Rules of thumb:
|
||||
|
||||
- **CLI is simpler** for one-off operations from the terminal. It handles auth internally and formats output nicely. Prefer CLI when the user is working locally.
|
||||
- **REST is needed** when the user is building an integration, scripting from another language, or accessing a remote ZeroClaw instance. Also needed for streaming (WebSocket, SSE).
|
||||
- If unclear, **default to CLI** — it's less setup.
|
||||
|
||||
## Core Operations
|
||||
|
||||
### Sending Messages
|
||||
|
||||
**CLI:** `zeroclaw agent -m "your message here"` — remember, always use `-m` mode, not bare `zeroclaw agent`.
|
||||
|
||||
**REST:**
|
||||
```bash
|
||||
curl -X POST http://127.0.0.1:42617/webhook \
|
||||
-H "Authorization: Bearer <token>" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"message": "your message here"}'
|
||||
```
|
||||
Response: `{"response": "...", "model": "..."}`
|
||||
|
||||
**WebSocket** (for streaming): connect to `ws://127.0.0.1:42617/ws/chat?token=<token>`, send `{"type": "message", "content": "..."}`, receive `{"type": "done", "full_response": "..."}`.
|
||||
|
||||
### System Status
|
||||
|
||||
Run `zeroclaw status` to see provider, model, uptime, channels, memory backend. For deeper diagnostics: `zeroclaw doctor`.
|
||||
|
||||
**REST:** `GET /api/status` (same info as JSON), `GET /health` (no auth, quick ok/not-ok).
|
||||
|
||||
### Memory
|
||||
|
||||
The CLI can list, get, and clear memories but **cannot store** them directly. To store a memory:
|
||||
- Via agent: `zeroclaw agent -m "remember that my favorite color is blue"`
|
||||
- Via REST: `POST /api/memory` with `{"key": "...", "content": "...", "category": "core"}`
|
||||
|
||||
**CLI (read/delete):**
|
||||
- `zeroclaw memory list` — list all entries
|
||||
- `zeroclaw memory list --category core --limit 10` — filtered
|
||||
- `zeroclaw memory get "key-name"` — get specific entry
|
||||
- `zeroclaw memory stats` — usage statistics
|
||||
- `zeroclaw memory clear --key "prefix" --yes` — delete entries (confirm with user first)
|
||||
|
||||
**REST (full CRUD):**
|
||||
- `GET /api/memory` — list all (optional: `?query=search+text&category=core`)
|
||||
- `POST /api/memory` — store: `{"key": "...", "content": "...", "category": "core"}`
|
||||
- `DELETE /api/memory/{key}` — delete entry
|
||||
|
||||
Categories: `core`, `daily`, `conversation`, or any custom string.
|
||||
|
||||
### Cron / Scheduling
|
||||
|
||||
**CLI:**
|
||||
- `zeroclaw cron list` — show all jobs
|
||||
- `zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York` — recurring
|
||||
- `zeroclaw cron add-at '2026-03-11T10:00:00Z' 'Remind me'` — one-time at specific time
|
||||
- `zeroclaw cron add-every 3600000 'Check health'` — interval in ms
|
||||
- `zeroclaw cron once 30m 'Follow up'` — delay from now
|
||||
- `zeroclaw cron pause <id>` / `zeroclaw cron resume <id>` / `zeroclaw cron remove <id>`
|
||||
|
||||
**REST:**
|
||||
- `GET /api/cron` — list jobs
|
||||
- `POST /api/cron` — add: `{"name": "...", "schedule": "0 9 * * *", "command": "..."}`
|
||||
- `DELETE /api/cron/{id}` — remove job
|
||||
|
||||
### Tools
|
||||
|
||||
Tools are used automatically by the agent during conversations (shell, file ops, memory, browser, HTTP, web search, git, etc. — 30+ tools gated by security policy).
|
||||
|
||||
To see what's available: `GET /api/tools` (REST) lists all registered tools with descriptions and parameter schemas.
|
||||
|
||||
### Configuration
|
||||
|
||||
Edit `~/.zeroclaw/config.toml` directly, or re-run `zeroclaw onboard` to reconfigure.
|
||||
|
||||
**REST:**
|
||||
- `GET /api/config` — get current config (secrets masked as `***MASKED***`)
|
||||
- `PUT /api/config` — update config (send raw TOML as body, 1MB limit)
|
||||
|
||||
### Providers & Models
|
||||
|
||||
- `zeroclaw providers` — list all supported providers
|
||||
- `zeroclaw models list` — cached model catalog
|
||||
- `zeroclaw models refresh --all` — refresh from providers
|
||||
- `zeroclaw models set anthropic/claude-sonnet-4-6` — set default model
|
||||
|
||||
Override per-message: `zeroclaw agent -p anthropic --model claude-sonnet-4-6 -m "hello"`
|
||||
|
||||
### Real-Time Events (SSE)
|
||||
|
||||
REST only — useful for building dashboards or monitoring:
|
||||
```bash
|
||||
curl -N -H "Authorization: Bearer <token>" http://127.0.0.1:42617/api/events
|
||||
```
|
||||
Streams JSON events: `llm_request`, `tool_call_start`, `tool_call`, `agent_start`, `agent_end`, `error`.
|
||||
|
||||
### Cost Tracking
|
||||
|
||||
`GET /api/cost` — returns session/daily/monthly costs, token counts, per-model breakdown.
|
||||
|
||||
### Emergency Stop
|
||||
|
||||
Confirm with the user before running any estop command — these are disruptive.
|
||||
|
||||
- `zeroclaw estop --level kill-all` — stop everything
|
||||
- `zeroclaw estop --level network-kill` — block all network
|
||||
- `zeroclaw estop --level tool-freeze --tool shell` — freeze specific tool
|
||||
- `zeroclaw estop status` — check current estop state
|
||||
- `zeroclaw estop resume --network` — resume
|
||||
|
||||
### Gateway Lifecycle
|
||||
|
||||
- `zeroclaw gateway` — start HTTP gateway (foreground)
|
||||
- `zeroclaw gateway -p 8080 --host 127.0.0.1` — custom bind
|
||||
- `zeroclaw daemon` — start gateway + channels + scheduler + heartbeat
|
||||
- `zeroclaw service install/start/stop/status/uninstall` — OS service management
|
||||
|
||||
### Channels
|
||||
|
||||
ZeroClaw supports 21 messaging channels. To add one, you need to edit `~/.zeroclaw/config.toml`. For example, to set up Telegram:
|
||||
|
||||
```toml
|
||||
[channels]
|
||||
telegram = true
|
||||
|
||||
[channels_config.telegram]
|
||||
bot_token = "your-bot-token-from-botfather"
|
||||
allowed_users = [123456789]
|
||||
```
|
||||
|
||||
Then restart the daemon. Check channel health with `zeroclaw channels doctor`.
|
||||
|
||||
For the full list of channels and their config fields, read `references/cli-reference.md` (Channels section).
|
||||
|
||||
### Pairing (Authentication Setup)
|
||||
|
||||
When `require_pairing = true` (default), REST clients need a bearer token:
|
||||
```bash
|
||||
curl -X POST http://127.0.0.1:42617/pair -H "X-Pairing-Code: <code>"
|
||||
```
|
||||
Response includes `{"token": "..."}` — save this for subsequent requests.
|
||||
|
||||
## Common Workflows
|
||||
|
||||
Here are multi-step sequences you're likely to need:
|
||||
|
||||
**"Is my agent healthy?"**
|
||||
1. Run `zeroclaw status` — check provider, model, channels
|
||||
2. Run `zeroclaw doctor` — check connectivity, diagnose issues
|
||||
3. If gateway needed: `curl -sf http://127.0.0.1:42617/health`
|
||||
|
||||
**"Set up a new channel"**
|
||||
1. Read the current config: `cat ~/.zeroclaw/config.toml`
|
||||
2. Add the channel config (edit the TOML)
|
||||
3. Restart: `zeroclaw service restart` (or restart daemon manually)
|
||||
4. Verify: `zeroclaw channels doctor`
|
||||
|
||||
**"Switch to a different model"**
|
||||
1. Check available: `zeroclaw models list`
|
||||
2. Set it: `zeroclaw models set <provider/model>`
|
||||
3. Verify: `zeroclaw status`
|
||||
4. Test: `zeroclaw agent -m "hello, what model are you?"`
|
||||
|
||||
## Gateway Defaults
|
||||
|
||||
- **Port:** 42617
|
||||
- **Host:** 127.0.0.1
|
||||
- **Auth:** Pairing required (bearer token)
|
||||
- **Rate limits:** 60 webhook requests/min, 10 pairing attempts/min
|
||||
- **Body limit:** 64KB (1MB for config updates)
|
||||
- **Timeout:** 30 seconds
|
||||
- **Idempotency:** Optional `X-Idempotency-Key` header on `/webhook` (300s TTL)
|
||||
- **Config location:** `~/.zeroclaw/config.toml`
|
||||
|
||||
## Reference Files

For the complete API specification with every endpoint, field, and edge case, read `references/rest-api.md`.

For the full CLI command tree with all flags and options, read `references/cli-reference.md`.

Only load these when you need precise details beyond what's in this file — for most operations, the quick references above are sufficient.

## Troubleshooting

**"zeroclaw: command not found"** — Binary not in PATH. Check `./target/release/zeroclaw`, `~/.cargo/bin/zeroclaw`, or build from source with `cargo build --release`.

**"Connection refused" on REST calls** — Gateway isn't running. Start it with `zeroclaw gateway` or `zeroclaw daemon`.

**"Unauthorized" (401/403)** — Bearer token is missing or invalid. Re-pair via `POST /pair` with the pairing code, or check `~/.zeroclaw/config.toml` for the stored token.

**"LLM request failed" (500)** — Provider issue. Run `zeroclaw doctor` to check connectivity. Common causes: expired API key, provider outage, rate limiting on the provider side.

**"Too many requests" (429)** — You're hitting ZeroClaw's rate limit. Back off — the response includes `retry_after` with the number of seconds to wait.

**Agent not using tools / acting limited** — Check autonomy settings in config.toml under `[autonomy]`. `level = "read_only"` disables most tools. Try `level = "supervised"` or `level = "full"`.

**Memory not persisting** — Check `[memory]` config. If `backend = "none"`, nothing is stored. Switch to `"sqlite"` or `"markdown"`. Also verify `auto_save = true`.

**Channel not responding** — Run `zeroclaw channels doctor` for the specific channel. Common issues: expired bot token, wrong allowed_users list, channel not enabled in `[channels]`.

**Report errors to the user with context appropriate to their expertise level.** For beginners, explain what went wrong and suggest the fix. For experts, just show the error and the fix.

@@ -0,0 +1,23 @@
{
  "skill_name": "zeroclaw",
  "evals": [
    {
      "id": 0,
      "prompt": "how do i make my bot remember my name",
      "expected_output": "Executes a zeroclaw command to store a memory, explains what happened in beginner-friendly language",
      "files": []
    },
    {
      "id": 1,
      "prompt": "I want to schedule a daily health check on my ZeroClaw instance every morning at 9am ET",
      "expected_output": "Executes zeroclaw cron add with correct cron expression and timezone flag",
      "files": []
    },
    {
      "id": 2,
      "prompt": "Set up a Python script that monitors my ZeroClaw agent's activity via SSE and logs tool calls to a file",
      "expected_output": "Writes a Python script that connects to /api/events SSE endpoint with auth, filters for tool_call events, and logs to a file",
      "files": []
    }
  ]
}

@@ -0,0 +1,277 @@
# ZeroClaw CLI Reference

Complete command reference for the `zeroclaw` binary.

## Table of Contents

1. [Agent](#agent)
2. [Onboarding](#onboarding)
3. [Status & Diagnostics](#status--diagnostics)
4. [Memory](#memory)
5. [Cron](#cron)
6. [Providers & Models](#providers--models)
7. [Gateway & Daemon](#gateway--daemon)
8. [Service Management](#service-management)
9. [Channels](#channels)
10. [Security & Emergency Stop](#security--emergency-stop)
11. [Hardware Peripherals](#hardware-peripherals)
12. [Skills](#skills)
13. [Shell Completions](#shell-completions)

---

## Agent

Interactive chat or single-message mode.

```bash
zeroclaw agent                                          # Interactive REPL
zeroclaw agent -m "Summarize today's logs"              # Single message
zeroclaw agent -p anthropic --model claude-sonnet-4-6   # Override provider/model
zeroclaw agent -t 0.3                                   # Set temperature
zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0  # Attach hardware
```

**Key flags:**
- `-m <message>` — single message mode (no REPL)
- `-p <provider>` — override provider (openrouter, anthropic, openai, ollama)
- `--model <model>` — override model
- `-t <float>` — temperature (0.0–2.0)
- `--peripheral <name>:<port>` — attach hardware peripheral

The agent has access to 30+ tools gated by security policy: shell, file_read, file_write, file_edit, glob_search, content_search, memory_store, memory_recall, memory_forget, browser, http_request, web_fetch, web_search, cron, delegate, git, and more. Max tool iterations defaults to 10.

---

## Onboarding

First-time setup or reconfiguration.

```bash
zeroclaw onboard                        # Quick mode (default: openrouter)
zeroclaw onboard --provider anthropic   # Quick mode with specific provider
zeroclaw onboard --interactive          # Interactive wizard
zeroclaw onboard --memory sqlite        # Set memory backend
zeroclaw onboard --force                # Overwrite existing config
zeroclaw onboard --channels-only        # Repair channels only
```

**Key flags:**
- `--provider <name>` — openrouter (default), anthropic, openai, ollama
- `--model <model>` — default model
- `--memory <backend>` — sqlite, markdown, lucid, none
- `--force` — overwrite existing config.toml
- `--channels-only` — only repair channel configuration
- `--interactive` — step-by-step wizard

Creates `~/.zeroclaw/config.toml` with `0600` permissions.

---

## Status & Diagnostics

```bash
zeroclaw status          # System overview
zeroclaw doctor          # Run all diagnostic checks
zeroclaw doctor models   # Probe model connectivity
zeroclaw doctor traces   # Query execution traces
```

---

## Memory

```bash
zeroclaw memory list                             # List all entries
zeroclaw memory list --category core --limit 10  # Filtered list
zeroclaw memory get "some-key"                   # Get specific entry
zeroclaw memory stats                            # Usage statistics
zeroclaw memory clear --key "prefix" --yes       # Delete entries (requires --yes)
```

**Key flags:**
- `--category <name>` — filter by category (core, daily, conversation, custom)
- `--limit <n>` — limit results
- `--key <prefix>` — key prefix for clear operations
- `--yes` — skip confirmation (required for clear)

---

## Cron

```bash
zeroclaw cron list                                                    # List all jobs
zeroclaw cron add '0 9 * * 1-5' 'Good morning' --tz America/New_York  # Recurring (cron expr)
zeroclaw cron add-at '2026-03-11T10:00:00Z' 'Remind me about meeting' # One-time at specific time
zeroclaw cron add-every 3600000 'Check server health'                 # Interval in milliseconds
zeroclaw cron once 30m 'Follow up on that task'                       # Delay from now
zeroclaw cron pause <id>                                              # Pause job
zeroclaw cron resume <id>                                             # Resume job
zeroclaw cron remove <id>                                             # Delete job
```

**Subcommands:**
- `add <cron-expr> <command>` — standard cron expression (5-field)
- `add-at <iso-datetime> <command>` — fire once at exact time
- `add-every <ms> <command>` — repeating interval
- `once <duration> <command>` — delay from now (e.g., `30m`, `2h`, `1d`)

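The relation between the `once` duration specs and the millisecond intervals `add-every` takes can be sketched as a small converter (a hypothetical helper, not part of the CLI):

```python
def duration_to_ms(spec: str) -> int:
    """Parse delay specs like '30m', '2h', '1d' into milliseconds."""
    units = {"s": 1_000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}
    return int(spec[:-1]) * units[spec[-1]]

print(duration_to_ms("30m"))  # 1800000
print(duration_to_ms("2h"))   # 7200000
```

So `zeroclaw cron once 30m ...` fires after the same delay that `add-every 1800000 ...` uses as its interval.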
---

## Providers & Models

```bash
zeroclaw providers                               # List all 40+ supported providers
zeroclaw models list                             # Show cached model catalog
zeroclaw models refresh --all                    # Refresh catalogs from all providers
zeroclaw models set anthropic/claude-sonnet-4-6  # Set default model
zeroclaw models status                           # Current model info
```

Model routing in config.toml:
```toml
[[model_routes]]
hint = "reasoning"
provider = "openrouter"
model = "anthropic/claude-sonnet-4-6"
```

---

## Gateway & Daemon

```bash
zeroclaw gateway                           # Start HTTP gateway (foreground)
zeroclaw gateway -p 8080 --host 127.0.0.1  # Custom port/host

zeroclaw daemon                            # Gateway + channels + scheduler + heartbeat
zeroclaw daemon -p 8080 --host 0.0.0.0     # Custom bind
```

**Gateway defaults:**
- Port: 42617
- Host: 127.0.0.1
- Pairing required: true
- Public bind allowed: false

---

## Service Management

OS service lifecycle (systemd on Linux, launchd on macOS).

```bash
zeroclaw service install    # Install as system service
zeroclaw service start      # Start the service
zeroclaw service status     # Check service status
zeroclaw service stop       # Stop the service
zeroclaw service restart    # Restart the service
zeroclaw service uninstall  # Remove the service
```

**Logs:**
- macOS: `~/.zeroclaw/logs/daemon.stdout.log`
- Linux: `journalctl -u zeroclaw`

---

## Channels

Channels are configured in `config.toml` under `[channels]` and `[channels_config.*]`.

```bash
zeroclaw channels list    # List configured channels
zeroclaw channels doctor  # Check channel health
```

Supported channels (21 total): Telegram, Discord, Slack, WhatsApp (Meta), WATI, Linq (iMessage/RCS/SMS), Email (IMAP/SMTP), IRC, Matrix, Nostr, Signal, Nextcloud Talk, and more.

Channel config example (Telegram):
```toml
[channels]
telegram = true

[channels_config.telegram]
bot_token = "..."
allowed_users = [123456789]
```

---

## Security & Emergency Stop

```bash
zeroclaw estop --level kill-all                               # Stop everything
zeroclaw estop --level network-kill                           # Block all network access
zeroclaw estop --level domain-block --domain "*.example.com"  # Block specific domains
zeroclaw estop --level tool-freeze --tool shell               # Freeze specific tool
zeroclaw estop status                                         # Check estop state
zeroclaw estop resume --network                               # Resume (may require OTP)
```

**Estop levels:**
- `kill-all` — nuclear option, stops all agent activity
- `network-kill` — blocks all outbound network
- `domain-block` — blocks specific domain patterns
- `tool-freeze` — freezes individual tools

Autonomy config in config.toml:
```toml
[autonomy]
level = "supervised"  # read_only | supervised | full
workspace_only = true
allowed_commands = ["git", "cargo", "python"]
forbidden_paths = ["/etc", "/root", "~/.ssh"]
max_actions_per_hour = 20
max_cost_per_day_cents = 500
```

---

## Hardware Peripherals

```bash
zeroclaw hardware discover                            # Find USB devices
zeroclaw hardware introspect /dev/ttyACM0             # Probe device capabilities
zeroclaw peripheral list                              # List configured peripherals
zeroclaw peripheral add nucleo-f401re /dev/ttyACM0    # Add peripheral
zeroclaw peripheral flash-nucleo                      # Flash STM32 firmware
zeroclaw peripheral flash --port /dev/cu.usbmodem101  # Flash Arduino firmware
```

**Supported boards:** STM32 Nucleo-F401RE, Arduino Uno R4, Raspberry Pi GPIO, ESP32.

Attach to agent session: `zeroclaw agent --peripheral nucleo-f401re:/dev/ttyACM0`

---

## Skills

```bash
zeroclaw skills list                   # List installed skills
zeroclaw skills install <path-or-url>  # Install a skill
zeroclaw skills audit                  # Audit installed skills
zeroclaw skills remove <name>          # Remove a skill
```

---

## Shell Completions

```bash
zeroclaw completions zsh   # Generate Zsh completions
zeroclaw completions bash  # Generate Bash completions
zeroclaw completions fish  # Generate Fish completions
```

---

## Config File

Default location: `~/.zeroclaw/config.toml`

Config resolution order (first match wins):
1. `ZEROCLAW_CONFIG_DIR` environment variable
2. `ZEROCLAW_WORKSPACE` environment variable
3. `~/.zeroclaw/active_workspace.toml` marker file
4. `~/.zeroclaw/config.toml` (default)

@@ -0,0 +1,505 @@
# ZeroClaw REST API Reference

Complete endpoint reference for the ZeroClaw gateway HTTP API.

## Table of Contents

1. [Authentication](#authentication)
2. [Public Endpoints](#public-endpoints)
3. [Webhook](#webhook)
4. [WebSocket Chat](#websocket-chat)
5. [Status & Health](#status--health)
6. [Memory](#memory)
7. [Cron](#cron)
8. [Tools](#tools)
9. [Configuration](#configuration)
10. [Integrations](#integrations)
11. [Cost](#cost)
12. [Events (SSE)](#events-sse)
13. [Channel Webhooks](#channel-webhooks)
14. [Rate Limiting](#rate-limiting)
15. [Error Responses](#error-responses)

---

## Authentication

Three authentication mechanisms:

### Bearer Token (Primary)
```
Authorization: Bearer <token>
```
Obtained via `POST /pair`. Required for all `/api/*` endpoints when `require_pairing = true` (default).

### Webhook Secret
```
X-Webhook-Secret: <raw_secret>
```
Optional additional auth for `/webhook`. The server hashes the raw secret with SHA-256 and compares it against the stored hash in constant time.

### WebSocket Token
```
ws://host:port/ws/chat?token=<bearer_token>
```
WebSocket connections pass the token as a query parameter (browsers can't set custom headers on the WS handshake).

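A minimal sketch of the described hash-then-compare scheme, assuming the server stores the SHA-256 hex digest of the secret (the function name and storage format are illustrative):

```python
import hashlib
import hmac

def secret_matches(raw_secret: str, stored_sha256_hex: str) -> bool:
    """SHA-256 the presented secret, then compare digests in constant time."""
    digest = hashlib.sha256(raw_secret.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_sha256_hex)

stored = hashlib.sha256(b"s3cret").hexdigest()
print(secret_matches("s3cret", stored))  # True
print(secret_matches("wrong", stored))   # False
```

`hmac.compare_digest` avoids the timing side channel a plain `==` would leak.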
---

## Public Endpoints

### GET /health
No authentication required.

**Response 200:**
```json
{
  "status": "ok",
  "paired": true,
  "require_pairing": true,
  "runtime": {}
}
```

### GET /metrics
Prometheus text exposition format.

**Response 200:**
```
Content-Type: text/plain; version=0.0.4; charset=utf-8
```

### POST /pair
Exchange a one-time pairing code for a bearer token.

**Rate Limit:** Configurable per-minute limit per IP (default: 10/min).

**Headers:**
- `X-Pairing-Code: <code>` (required)

**Response 200 (success):**
```json
{
  "paired": true,
  "persisted": true,
  "token": "<bearer_token>",
  "message": "Save this token — use it as Authorization: Bearer <token>"
}
```

**Response 200 (persistence failure):**
```json
{
  "paired": true,
  "persisted": false,
  "token": "<bearer_token>",
  "message": "Paired for this process, but failed to persist token to config.toml..."
}
```

**Response 403:**
```json
{"error": "Invalid pairing code"}
```

**Response 429:**
```json
{"error": "Too many pairing requests. Please retry later.", "retry_after": 60}
```

**Response 429 (lockout):**
```json
{"error": "Too many failed attempts. Try again in {lockout_secs}s.", "retry_after": 120}
```

---

## Webhook

### POST /webhook
Send a message to the agent and receive a response.

**Rate Limit:** Configurable per-minute limit per IP (default: 60/min).

**Headers:**
- `Authorization: Bearer <token>` (if pairing enabled)
- `Content-Type: application/json`
- `X-Webhook-Secret: <secret>` (optional)
- `X-Idempotency-Key: <uuid>` (optional)

**Request Body:**
```json
{"message": "your prompt here"}
```

**Response 200:**
```json
{"response": "<llm_response>", "model": "<model_name>"}
```

**Response 200 (duplicate — idempotency key match):**
```json
{"status": "duplicate", "idempotent": true, "message": "Request already processed for this idempotency key"}
```

**Response 401:**
```json
{"error": "Unauthorized — pair first via POST /pair, then send Authorization: Bearer <token>"}
```

**Response 429:**
```json
{"error": "Too many webhook requests. Please retry later.", "retry_after": 60}
```

**Response 500:**
```json
{"error": "LLM request failed"}
```

### Idempotency
- Header: `X-Idempotency-Key: <uuid>`
- TTL: configurable, default 300 seconds
- Max tracked keys: configurable, default 10,000
- Duplicate requests within TTL return `"status": "duplicate"` instead of re-processing

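The TTL-based duplicate detection above can be sketched as a small store; this is an illustration of the documented behavior (300 s TTL, bounded key count), not the gateway's actual implementation:

```python
class IdempotencyStore:
    """Track idempotency keys; a key counts as duplicate only within the TTL."""

    def __init__(self, ttl_seconds: float = 300.0, max_keys: int = 10_000):
        self.ttl = ttl_seconds
        self.max_keys = max_keys
        self.seen = {}  # key -> first-seen timestamp

    def is_duplicate(self, key: str, now: float) -> bool:
        # Drop expired keys first, then check membership.
        self.seen = {k: t for k, t in self.seen.items() if now - t < self.ttl}
        if key in self.seen:
            return True
        if len(self.seen) < self.max_keys:
            self.seen[key] = now
        return False

store = IdempotencyStore()
print(store.is_duplicate("req-1", now=0.0))    # False (first time)
print(store.is_duplicate("req-1", now=100.0))  # True (within 300s TTL)
print(store.is_duplicate("req-1", now=400.0))  # False (TTL expired)
```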
---

## WebSocket Chat

### GET /ws/chat?token=<bearer_token>
Streaming agent chat over WebSocket.

**Client → Server:**
```json
{"type": "message", "content": "Hello, what's the weather?"}
```

**Server → Client (complete response):**
```json
{"type": "done", "full_response": "The weather in San Francisco is sunny..."}
```

**Server → Client (error):**
```json
{"type": "error", "message": "Error message here"}
```

Unknown message types are ignored; invalid JSON triggers an error response.

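A client-side dispatcher for those server frames might look like this (the function name and return conventions are ours; the frame shapes are the documented ones):

```python
import json

def handle_ws_message(raw: str) -> str:
    """Dispatch a server->client frame: done yields the text, error is reported,
    unknown types are ignored, per the protocol above."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return "invalid json"
    kind = msg.get("type")
    if kind == "done":
        return msg["full_response"]
    if kind == "error":
        return f"error: {msg['message']}"
    return ""  # ignore unknown message types

print(handle_ws_message('{"type": "done", "full_response": "hi"}'))  # hi
```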
---

## Status & Health

### GET /api/status
**Response 200:**
```json
{
  "provider": "openrouter",
  "model": "anthropic/claude-sonnet-4",
  "temperature": 0.7,
  "uptime_seconds": 3600,
  "gateway_port": 42617,
  "locale": "en",
  "memory_backend": "sqlite",
  "paired": true,
  "channels": {
    "telegram": false,
    "discord": true,
    "slack": false
  },
  "health": {}
}
```

### GET /api/health
Component health snapshot (requires auth).
```json
{"health": {}}
```

### GET or POST /api/doctor
Run system diagnostics.
```json
{
  "results": [
    {"name": "provider_connectivity", "severity": "ok", "message": "OpenRouter API reachable"}
  ],
  "summary": {"ok": 5, "warnings": 1, "errors": 0}
}
```

---

## Memory

### GET /api/memory
List or search memory entries.

**Query Parameters:**
- `query` (string, optional) — search text; triggers search mode
- `category` (string, optional) — filter by category

**Response 200:**
```json
{
  "entries": [
    {
      "key": "memory_key",
      "content": "memory content",
      "category": "core",
      "timestamp": "2025-01-10T12:00:00Z"
    }
  ]
}
```

### POST /api/memory
Store a memory entry.

**Request Body:**
```json
{
  "key": "unique_key",
  "content": "memory content",
  "category": "core"
}
```
Category defaults to `"core"` if omitted. Other values: `daily`, `conversation`, or any custom string.

**Response 200:**
```json
{"status": "ok"}
```

### DELETE /api/memory/{key}
Delete a memory entry.

**Response 200:**
```json
{"status": "ok", "deleted": true}
```

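Building a `POST /api/memory` body with the documented default category can be sketched as (helper name ours):

```python
import json

def memory_payload(key: str, content: str, category: str = "core") -> str:
    """Request body for POST /api/memory; category falls back to "core" as documented."""
    return json.dumps({"key": key, "content": content, "category": category})

print(memory_payload("user_name", "The user's name is Ada"))
# {"key": "user_name", "content": "The user's name is Ada", "category": "core"}
```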
---

## Cron

### GET /api/cron
List all scheduled jobs.

**Response 200:**
```json
{
  "jobs": [
    {
      "id": "<uuid>",
      "name": "daily-backup",
      "command": "backup.sh",
      "next_run": "2025-01-10T15:00:00Z",
      "last_run": "2025-01-09T15:00:00Z",
      "last_status": "success",
      "enabled": true
    }
  ]
}
```

### POST /api/cron
Add a new job.

**Request Body:**
```json
{
  "name": "job-name",
  "schedule": "0 9 * * *",
  "command": "command to run"
}
```

**Response 200:**
```json
{
  "status": "ok",
  "job": {"id": "<uuid>", "name": "job-name", "command": "command to run", "enabled": true}
}
```

### DELETE /api/cron/{id}
Remove a job.

**Response 200:**
```json
{"status": "ok"}
```

---

## Tools

### GET /api/tools
List all registered tools with descriptions and parameter schemas.

**Response 200:**
```json
{
  "tools": [
    {"name": "shell", "description": "Execute shell commands", "parameters": {}},
    {"name": "file_read", "description": "Read file contents", "parameters": {}}
  ]
}
```

---

## Configuration

### GET /api/config
Get current config. Secrets are masked as `***MASKED***`.

**Response 200:**
```json
{"format": "toml", "content": "<toml_string>"}
```

### PUT /api/config
Update config from TOML body. Body limit: 1 MB.

**Request Body:** Raw TOML text.

**Response 200:**
```json
{"status": "ok"}
```

**Response 400:**
```json
{"error": "Invalid TOML: <details>"}
```
or
```json
{"error": "Invalid config: <validation_error>"}
```

---

## Integrations

### GET /api/integrations
List all integrations and their status.

**Response 200:**
```json
{
  "integrations": [
    {"name": "openrouter", "description": "OpenRouter LLM provider", "category": "providers", "status": "ok"},
    {"name": "telegram", "description": "Telegram messaging channel", "category": "channels", "status": "configured"}
  ]
}
```

---

## Cost

### GET /api/cost
Cost tracking summary.

**Response 200:**
```json
{
  "cost": {
    "session_cost_usd": 1.50,
    "daily_cost_usd": 5.00,
    "monthly_cost_usd": 150.00,
    "total_tokens": 50000,
    "request_count": 25,
    "by_model": {"anthropic/claude-sonnet-4": 1.50}
  }
}
```

---

## Events (SSE)

### GET /api/events
Server-Sent Events stream. Requires bearer token.

**Content-Type:** `text/event-stream`

**Event types:**

| Type | Fields | Description |
|------|--------|-------------|
| `llm_request` | provider, model, timestamp | LLM call started |
| `tool_call_start` | tool, timestamp | Tool execution started |
| `tool_call` | tool, duration_ms, success, timestamp | Tool execution completed |
| `agent_start` | provider, model, timestamp | Agent loop started |
| `agent_end` | provider, model, duration_ms, tokens_used, cost_usd, timestamp | Agent loop completed |
| `error` | component, message, timestamp | Error occurred |

**Example:**
```bash
curl -N -H "Authorization: Bearer <token>" http://127.0.0.1:42617/api/events
```

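Filtering the stream for completed tool calls can be sketched as follows. The field names come from the table above; the assumption that each event arrives as a JSON payload on a `data:` line follows the standard SSE framing and may differ in detail:

```python
import json

def tool_calls(sse_lines):
    """Yield (tool, duration_ms) for completed tool_call events in an SSE body."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments, event: lines, and blank separators
        event = json.loads(line[len("data: "):])
        if event.get("type") == "tool_call":
            yield event["tool"], event["duration_ms"]

stream = [
    'data: {"type": "tool_call_start", "tool": "shell", "timestamp": "t0"}',
    'data: {"type": "tool_call", "tool": "shell", "duration_ms": 42, "success": true, "timestamp": "t1"}',
]
print(list(tool_calls(stream)))  # [('shell', 42)]
```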
---

## Channel Webhooks

These are incoming webhook endpoints for specific messaging channels. They're set up automatically when channels are configured.

### WhatsApp (Meta Cloud API)
- `GET /whatsapp` — verification (echoes `hub.challenge`)
- `POST /whatsapp` — incoming messages (signature verified via `X-Hub-Signature-256`)

### WATI (WhatsApp Business)
- `GET /wati` — verification (echoes `challenge`)
- `POST /wati` — incoming messages

### Linq (iMessage/RCS/SMS)
- `POST /linq` — incoming messages (signature verified via `X-Webhook-Signature` + `X-Webhook-Timestamp`)

### Nextcloud Talk
- `POST /nextcloud-talk` — bot API webhook (signature verified via `X-Nextcloud-Talk-Signature`)

---

## Rate Limiting

Rate limits use a sliding 60-second window, tracked per client IP.

| Endpoint | Default Limit |
|----------|--------------|
| `POST /pair` | 10/min |
| `POST /webhook` | 60/min |

If `trust_forwarded_headers` is enabled, the client IP is taken from `X-Forwarded-For`.

Max tracked keys: configurable (default: 10,000).

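A sliding-window limiter of the kind described can be sketched like this (an illustration of the policy, not the gateway's code):

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` hits per IP within a sliding `window` of seconds."""

    def __init__(self, limit: int, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.hits = {}  # ip -> deque of hit timestamps

    def allow(self, ip: str, now: float) -> bool:
        q = self.hits.setdefault(ip, deque())
        # Evict hits that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=2)
print(limiter.allow("1.2.3.4", 0.0))   # True
print(limiter.allow("1.2.3.4", 1.0))   # True
print(limiter.allow("1.2.3.4", 2.0))   # False (2 hits already in window)
print(limiter.allow("1.2.3.4", 61.0))  # True (first hits aged out)
```

Unlike a fixed per-minute bucket, the window slides with each request, so a burst straddling a minute boundary cannot double the effective rate.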
---

## Error Responses

**Standard format:**
```json
{"error": "Human-readable error message"}
```

**With retry info:**
```json
{"error": "...", "retry_after": 60}
```

**Status codes:**
| Code | Meaning |
|------|---------|
| 200 | Success |
| 400 | Invalid JSON, missing fields, invalid TOML |
| 401 | Invalid/missing bearer token or webhook secret |
| 403 | Pairing verification failed |
| 404 | Endpoint or channel not configured |
| 408 | Request timeout (30s) |
| 429 | Rate limited (check `retry_after`) |
| 500 | LLM error, database error, internal failure |

@@ -20,16 +20,12 @@ reviews:
  enabled: true
  # Only review PRs targeting these branches
  base_branches:
    - main
    - develop
    - master
  # Skip reviews for draft PRs or WIP
  drafts: false
  # Enable base branch analysis
  base_branch_analysis: true

# Poem configuration
poem:
  enabled: false
# Poem feature toggle (must be a boolean, not an object)
poem: false

# Reviewer suggestions
reviewer:

@@ -1,25 +1,3 @@
# EditorConfig — https://editorconfig.org
# Provides consistent formatting defaults across editors and platforms.

root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
indent_style = space
indent_size = 4

[*.md]
# Trailing whitespace is significant in Markdown (line breaks).
trim_trailing_whitespace = false

[*.{yml,yaml}]
indent_size = 2

[*.toml]
indent_size = 2

[Dockerfile]
indent_size = 4

@@ -59,6 +59,7 @@ PROVIDER=openrouter
# ZAI_API_KEY=...
# SYNTHETIC_API_KEY=...
# OPENCODE_API_KEY=...
# OPENCODE_GO_API_KEY=...
# VERCEL_API_KEY=...
# CLOUDFLARE_API_KEY=...

@@ -1,33 +1 @@
# Normalize all text files
* text=auto

# Force LF for scripts and build-critical files
*.sh text eol=lf
Dockerfile* text eol=lf
*.rs text eol=lf
*.toml text eol=lf
*.yml text eol=lf
*.yaml text eol=lf

# CI
.github/**/* text eol=lf

# Images
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary

# Archives
*.zip binary
*.tar binary
*.tgz binary
*.gz binary
*.7z binary

# Compiled artifacts
*.so binary
*.dll binary
*.exe binary
*.a binary

@@ -1,28 +1,32 @@
# Default owner for all files
* @theonlyhennygod
* @theonlyhennygod @JordanTheJet @SimianAstronaut7

# High-risk surfaces
/src/security/** @willsarg
/src/runtime/** @theonlyhennygod
/src/memory/** @theonlyhennygod @chumyin
/.github/** @theonlyhennygod
/Cargo.toml @theonlyhennygod
/Cargo.lock @theonlyhennygod
# Important functional modules
/src/agent/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/providers/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/channels/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/tools/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/gateway/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/runtime/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/src/memory/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/Cargo.toml @theonlyhennygod @JordanTheJet @SimianAstronaut7
/Cargo.lock @theonlyhennygod @JordanTheJet @SimianAstronaut7

# CI
/.github/workflows/** @theonlyhennygod @willsarg
/.github/codeql/** @willsarg
/.github/dependabot.yml @willsarg
# Security / tests / CI-CD ownership
/src/security/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/tests/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/workflows/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/codeql/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/.github/dependabot.yml @theonlyhennygod @JordanTheJet @SimianAstronaut7
/SECURITY.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/actions-source-policy.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/ci-map.md @theonlyhennygod @JordanTheJet @SimianAstronaut7

# Docs & governance
/docs/** @chumyin
/AGENTS.md @chumyin
/CLAUDE.md @chumyin
/CONTRIBUTING.md @chumyin
/docs/pr-workflow.md @chumyin
/docs/reviewer-playbook.md @chumyin

# Security / CI-CD governance overrides (last-match wins)
/SECURITY.md @willsarg
/docs/actions-source-policy.md @willsarg
/docs/ci-map.md @willsarg
/docs/** @theonlyhennygod @JordanTheJet @SimianAstronaut7
/AGENTS.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/CLAUDE.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/CONTRIBUTING.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/pr-workflow.md @theonlyhennygod @JordanTheJet @SimianAstronaut7
/docs/reviewer-playbook.md @theonlyhennygod @JordanTheJet @SimianAstronaut7

@@ -11,15 +11,6 @@ body:
|
||||
Please provide a minimal reproducible case so maintainers can triage quickly.
|
||||
Do not include personal/sensitive data; redact and anonymize all logs/payloads.
|
||||
|
||||
- type: input
|
||||
id: summary
|
||||
attributes:
|
||||
label: Summary
|
||||
description: One-line description of the problem.
|
||||
placeholder: zeroclaw daemon exits immediately when ...
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: dropdown
|
||||
id: component
|
||||
attributes:
|
||||
@@ -83,13 +74,13 @@ body:
|
||||
id: impact
|
||||
attributes:
|
||||
label: Impact
|
||||
description: Who is affected, how often, and practical consequences.
|
||||
description: Who is affected, how often, and practical consequences (optional but helps triage).
|
||||
placeholder: |
|
||||
Affected users: ...
|
||||
Frequency: always/intermittent
|
||||
Consequence: ...
|
||||
validations:
|
||||
required: true
|
||||
required: false
|
||||
|
||||
- type: textarea
|
||||
id: logs
|
||||
@@ -112,9 +103,10 @@ body:
    id: rust
    attributes:
      label: Rust version
      description: Required for runtime/build bugs; optional for docs/config issues.
      placeholder: rustc 1.xx.x
    validations:
      required: true
      required: false

  - type: input
    id: os
@@ -140,9 +132,7 @@ body:
    attributes:
      label: Pre-flight checks
      options:
        - label: I reproduced this on the latest main branch or latest release.
        - label: I reproduced this on the latest master branch or latest release.
          required: true
        - label: I redacted secrets/tokens from logs.
          required: true
        - label: I removed personal identifiers and replaced identity-specific data with neutral placeholders.
        - label: I redacted secrets, tokens, and personal data from all submitted content.
          required: true

@@ -4,8 +4,8 @@ contact_links:
    url: https://github.com/zeroclaw-labs/zeroclaw/security/policy
    about: Please report security vulnerabilities privately via SECURITY.md policy.
  - name: Contribution guide
    url: https://github.com/zeroclaw-labs/zeroclaw/blob/main/CONTRIBUTING.md
    url: https://github.com/zeroclaw-labs/zeroclaw/blob/master/CONTRIBUTING.md
    about: Please read contribution and PR requirements before opening an issue.
  - name: PR workflow & reviewer expectations
    url: https://github.com/zeroclaw-labs/zeroclaw/blob/main/docs/pr-workflow.md
    url: https://github.com/zeroclaw-labs/zeroclaw/blob/master/docs/pr-workflow.md
    about: Read risk-based PR tracks, CI gates, and merge criteria before filing feature requests.

@@ -42,10 +42,10 @@ body:
    id: non_goals
    attributes:
      label: Non-goals / out of scope
      description: Clarify what should not be included in the first iteration.
      description: Clarify what should not be included in the first iteration (optional but helps scope discussion).
      placeholder: No UI changes, no cross-provider dynamic adaptation in v1.
    validations:
      required: true
      required: false

  - type: textarea
    id: alternatives
@@ -60,31 +60,31 @@ body:
    id: acceptance
    attributes:
      label: Acceptance criteria
      description: What outcomes would make this request complete?
      description: What outcomes would make this request complete? (optional — can be defined during triage)
      placeholder: |
        - Config key is documented and validated
        - Runtime path uses configured retry budget
        - Regression tests cover fallback and invalid config
    validations:
      required: true
      required: false

  - type: textarea
    id: architecture
    attributes:
      label: Architecture impact
      description: Which subsystem(s) are affected?
      description: Which subsystem(s) are affected? (optional — maintainers will assess during triage)
      placeholder: providers/, channels/, memory/, runtime/, security/, docs/ ...
    validations:
      required: true
      required: false

  - type: textarea
    id: risk
    attributes:
      label: Risk and rollback
      description: Main risk + how to disable/revert quickly.
      description: Main risk + how to disable/revert quickly (optional — can be defined during planning).
      placeholder: Risk is ... rollback is ...
    validations:
      required: true
      required: false

  - type: dropdown
    id: breaking

@@ -5,7 +5,7 @@ updates:
    directory: "/"
    schedule:
      interval: daily
    target-branch: dev
    target-branch: master
    open-pull-requests-limit: 3
    labels:
      - "dependencies"
@@ -21,7 +21,7 @@ updates:
    directory: "/"
    schedule:
      interval: daily
    target-branch: dev
    target-branch: master
    open-pull-requests-limit: 1
    labels:
      - "ci"
@@ -38,7 +38,7 @@ updates:
    directory: "/"
    schedule:
      interval: daily
    target-branch: dev
    target-branch: master
    open-pull-requests-limit: 1
    labels:
      - "ci"

@@ -2,7 +2,7 @@

Describe this PR in 2-5 bullets:

- Base branch target (`dev` for normal contributions; `main` only for `dev` promotion):
- Base branch target (`master` for all contributions):
- Problem:
- Why it matters:
- What changed:

@@ -10,21 +10,8 @@ Subdirectories are not valid locations for workflow entry files.
Repository convention:

1. Keep runnable workflow entry files at `.github/workflows/` root.
2. Keep workflow-only helper scripts under `.github/workflows/scripts/`.
3. Keep cross-tooling/local CI scripts under `scripts/ci/` when they are used outside Actions.
2. Keep cross-tooling/local CI scripts under `dev/` or `scripts/ci/` when used outside Actions.

Workflow behavior documentation in this directory:

- `.github/workflows/main-branch-flow.md`

Current workflow helper scripts:

- `.github/workflows/scripts/ci_workflow_owner_approval.js`
- `.github/workflows/scripts/ci_license_file_owner_guard.js`
- `.github/workflows/scripts/lint_feedback.js`
- `.github/workflows/scripts/pr_auto_response_contributor_tier.js`
- `.github/workflows/scripts/pr_auto_response_labeled_routes.js`
- `.github/workflows/scripts/pr_check_status_nudge.js`
- `.github/workflows/scripts/pr_intake_checks.js`
- `.github/workflows/scripts/pr_labeler.js`
- `.github/workflows/scripts/test_benchmarks_pr_comment.js`
- `.github/workflows/master-branch-flow.md`

@@ -0,0 +1,171 @@
name: Quality Gate

on:
  pull_request:
    branches: [master]

concurrency:
  group: checks-${{ github.event.pull_request.number }}
  cancel-in-progress: true

permissions:
  contents: read

env:
  CARGO_TERM_COLOR: always
  CARGO_INCREMENTAL: 0

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          components: rustfmt, clippy
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2

      - name: Ensure web/dist placeholder exists
        run: mkdir -p web/dist && touch web/dist/.gitkeep

      - name: Check formatting
        run: cargo fmt --all -- --check

      - name: Clippy
        run: cargo clippy --all-targets -- -D warnings

  test:
    name: Test
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2

      - name: Ensure web/dist placeholder exists
        run: mkdir -p web/dist && touch web/dist/.gitkeep

      - name: Install mold linker
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y mold

      - name: Install cargo-nextest
        run: curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin

      - name: Run tests
        run: cargo nextest run --locked
        env:
          CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
          CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"

  build:
    name: Build ${{ matrix.target }}
    runs-on: ${{ matrix.os }}
    timeout-minutes: 40
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-latest
            target: x86_64-unknown-linux-gnu
          - os: macos-14
            target: aarch64-apple-darwin
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          targets: ${{ matrix.target }}
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2

      - name: Install mold linker
        if: runner.os == 'Linux'
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y mold

      - name: Ensure web/dist placeholder exists
        run: mkdir -p web/dist && touch web/dist/.gitkeep

      - name: Build release
        shell: bash
        run: cargo build --release --locked --target ${{ matrix.target }}
        env:
          CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
          CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"

  security:
    name: Security Audit
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2

      - name: Install cargo-audit
        run: cargo install cargo-audit --locked

      - name: Install cargo-deny
        run: cargo install cargo-deny --locked

      - name: Audit dependencies
        run: cargo audit

      - name: Check licenses and sources
        run: cargo deny check licenses sources

  check-32bit:
    name: "Check (32-bit)"
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          targets: i686-unknown-linux-gnu
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
      - name: Install 32-bit libs
        run: sudo apt-get update && sudo apt-get install -y gcc-multilib
      - name: Ensure web/dist placeholder exists
        run: mkdir -p web/dist && touch web/dist/.gitkeep
      - name: Cargo check (32-bit, no default features)
        run: cargo check --target i686-unknown-linux-gnu --no-default-features

  # Composite status check — branch protection only needs to require this
  # single job instead of tracking every matrix leg individually.
  gate:
    name: CI Required Gate
    if: always()
    needs: [lint, test, build, security, check-32bit]
    runs-on: ubuntu-latest
    steps:
      - name: Check upstream job results
        run: |
          if [[ "${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}" == "true" ]]; then
            echo "::error::One or more upstream jobs failed or were cancelled"
            exit 1
          fi

  security-gate:
    name: Security Required Gate
    if: always()
    needs: [security]
    runs-on: ubuntu-latest
    steps:
      - name: Check security job result
        run: |
          if [[ "${{ needs.security.result }}" != "success" ]]; then
            echo "::error::Security audit failed or was cancelled"
            exit 1
          fi
@@ -1,61 +0,0 @@
name: CI Build (Fast)

# Optional fast release build that runs alongside the normal Build (Smoke) job.
# This workflow is informational and does not gate merges.

on:
  push:
    branches: [dev, main]
  pull_request:
    branches: [dev, main]

concurrency:
  group: ci-fast-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read

env:
  CARGO_TERM_COLOR: always

jobs:
  changes:
    name: Detect Change Scope
    runs-on: blacksmith-2vcpu-ubuntu-2404
    outputs:
      rust_changed: ${{ steps.scope.outputs.rust_changed }}
      docs_only: ${{ steps.scope.outputs.docs_only }}
      workflow_changed: ${{ steps.scope.outputs.workflow_changed }}
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0
      - name: Detect docs-only changes
        id: scope
        shell: bash
        env:
          EVENT_NAME: ${{ github.event_name }}
          BASE_SHA: ${{ github.event_name == 'pull_request' && github.event.pull_request.base.sha || github.event.before }}
        run: ./scripts/ci/detect_change_scope.sh

  build-fast:
    name: Build (Fast)
    needs: [changes]
    if: needs.changes.outputs.rust_changed == 'true' || needs.changes.outputs.workflow_changed == 'true'
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 25
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0

      - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
        with:
          prefix-key: fast-build
          cache-targets: true

      - name: Build release binary
        run: cargo build --release --locked --verbose
+138 −308
@@ -1,336 +1,166 @@
name: CI Run
name: CI

on:
  push:
    branches: [dev, main]
  pull_request:
    branches: [dev, main]
  push:
    branches: [master]
  pull_request:
    branches: [master]

concurrency:
  group: ci-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true
  group: ci-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read
  contents: read

env:
  CARGO_TERM_COLOR: always
  CARGO_TERM_COLOR: always
  CARGO_INCREMENTAL: 0

jobs:
  changes:
    name: Detect Change Scope
    runs-on: blacksmith-2vcpu-ubuntu-2404
    outputs:
      docs_only: ${{ steps.scope.outputs.docs_only }}
      docs_changed: ${{ steps.scope.outputs.docs_changed }}
      rust_changed: ${{ steps.scope.outputs.rust_changed }}
      workflow_changed: ${{ steps.scope.outputs.workflow_changed }}
      docs_files: ${{ steps.scope.outputs.docs_files }}
      base_sha: ${{ steps.scope.outputs.base_sha }}
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0
  lint:
    name: Lint
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          components: rustfmt, clippy
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2

      - name: Detect docs-only changes
        id: scope
        shell: bash
        env:
          EVENT_NAME: ${{ github.event_name }}
          BASE_SHA: ${{ github.event_name == 'pull_request' && github.event.pull_request.base.sha || github.event.before }}
        run: ./scripts/ci/detect_change_scope.sh
      - name: Ensure web/dist placeholder exists
        run: mkdir -p web/dist && touch web/dist/.gitkeep

  lint:
    name: Lint Gate (Format + Clippy + Strict Delta)
    needs: [changes]
    if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full'))
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 25
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          components: rustfmt, clippy
      - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
      - name: Run rust quality gate
        run: ./scripts/ci/rust_quality_gate.sh
      - name: Run strict lint delta gate
        env:
          BASE_SHA: ${{ needs.changes.outputs.base_sha }}
        run: ./scripts/ci/rust_strict_delta_gate.sh
      - name: Check formatting
        run: cargo fmt --all -- --check

  test:
    name: Test
    needs: [changes, lint]
    if: needs.changes.outputs.rust_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full')) && needs.lint.result == 'success'
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
      - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
      - name: Run tests
        run: cargo test --locked --verbose
      - name: Clippy
        run: cargo clippy --all-targets -- -D warnings

  build:
    name: Build (Smoke)
    needs: [changes]
    if: needs.changes.outputs.rust_changed == 'true'
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 20
  lint-strict-delta:
    name: Strict Delta Lint
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          components: clippy
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2

    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
      - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
      - name: Build binary (smoke check)
        run: cargo build --profile release-fast --locked --verbose
      - name: Check binary size
        run: bash scripts/ci/check_binary_size.sh target/release-fast/zeroclaw
      - name: Ensure web/dist placeholder exists
        run: mkdir -p web/dist && touch web/dist/.gitkeep

  docs-only:
    name: Docs-Only Fast Path
    needs: [changes]
    if: needs.changes.outputs.docs_only == 'true'
    runs-on: blacksmith-2vcpu-ubuntu-2404
    steps:
      - name: Skip heavy jobs for docs-only change
        run: echo "Docs-only change detected. Rust lint/test/build skipped."
      - name: Run strict delta lint gate
        run: bash scripts/ci/rust_strict_delta_gate.sh
        env:
          BASE_SHA: ${{ github.event.pull_request.base.sha || github.event.before }}

  non-rust:
    name: Non-Rust Fast Path
    needs: [changes]
    if: needs.changes.outputs.docs_only != 'true' && needs.changes.outputs.rust_changed != 'true'
    runs-on: blacksmith-2vcpu-ubuntu-2404
    steps:
      - name: Skip Rust jobs for non-Rust change scope
        run: echo "No Rust-impacting files changed. Rust lint/test/build skipped."
  test:
    name: Test
    runs-on: ubuntu-latest
    timeout-minutes: 30
    needs: [lint]
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2

  docs-quality:
    name: Docs Quality
    needs: [changes]
    if: needs.changes.outputs.docs_changed == 'true' && (github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'ci:full'))
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0
      - name: Ensure web/dist placeholder exists
        run: mkdir -p web/dist && touch web/dist/.gitkeep

      - name: Markdown lint (changed lines only)
        env:
          BASE_SHA: ${{ needs.changes.outputs.base_sha }}
          DOCS_FILES: ${{ needs.changes.outputs.docs_files }}
        run: ./scripts/ci/docs_quality_gate.sh
      - name: Install mold linker
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y mold

      - name: Collect added links
        id: collect_links
        shell: bash
        env:
          BASE_SHA: ${{ needs.changes.outputs.base_sha }}
          DOCS_FILES: ${{ needs.changes.outputs.docs_files }}
        run: |
          set -euo pipefail
          python3 ./scripts/ci/collect_changed_links.py \
            --base "$BASE_SHA" \
            --docs-files "$DOCS_FILES" \
            --output .ci-added-links.txt
          count=$(wc -l < .ci-added-links.txt | tr -d ' ')
          echo "count=$count" >> "$GITHUB_OUTPUT"
          if [ "$count" -gt 0 ]; then
            echo "Added links queued for check:"
            cat .ci-added-links.txt
          else
            echo "No added links found in changed docs lines."
          fi
      - name: Install cargo-nextest
        run: curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin

      - name: Link check (offline, added links only)
        if: steps.collect_links.outputs.count != '0'
        uses: lycheeverse/lychee-action@a8c4c7cb88f0c7386610c35eb25108e448569cb0 # v2
        with:
          fail: true
          args: >-
            --offline
            --no-progress
            --format detailed
            .ci-added-links.txt
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Run tests
        run: cargo nextest run --locked
        env:
          CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
          CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"

      - name: Skip link check (no added links)
        if: steps.collect_links.outputs.count == '0'
        run: echo "No added links in changed docs lines. Link check skipped."
  build:
    name: Build ${{ matrix.target }}
    runs-on: ${{ matrix.os }}
    timeout-minutes: 40
    needs: [lint]
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-latest
            target: x86_64-unknown-linux-gnu
          - os: macos-14
            target: aarch64-apple-darwin
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          targets: ${{ matrix.target }}
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2

  lint-feedback:
    name: Lint Feedback
    if: github.event_name == 'pull_request'
    needs: [changes, lint, docs-quality]
    runs-on: blacksmith-2vcpu-ubuntu-2404
    permissions:
      contents: read
      pull-requests: write
      issues: write
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - name: Install mold linker
        if: runner.os == 'Linux'
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y mold

      - name: Post actionable lint failure summary
        if: always()
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          RUST_CHANGED: ${{ needs.changes.outputs.rust_changed }}
          DOCS_CHANGED: ${{ needs.changes.outputs.docs_changed }}
          LINT_RESULT: ${{ needs.lint.result }}
          LINT_DELTA_RESULT: ${{ needs.lint.result }}
          DOCS_RESULT: ${{ needs.docs-quality.result }}
        with:
          script: |
            const script = require('./.github/workflows/scripts/lint_feedback.js');
            await script({github, context, core});
      - name: Ensure web/dist placeholder exists
        run: mkdir -p web/dist && touch web/dist/.gitkeep

  workflow-owner-approval:
    name: Workflow Owner Approval
    needs: [changes]
    if: github.event_name == 'pull_request' && needs.changes.outputs.workflow_changed == 'true'
    runs-on: blacksmith-2vcpu-ubuntu-2404
    permissions:
      contents: read
      pull-requests: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - name: Build release
        shell: bash
        run: cargo build --release --locked --target ${{ matrix.target }}
        env:
          CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
          CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"

      - name: Require owner approval for workflow file changes
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          WORKFLOW_OWNER_LOGINS: ${{ vars.WORKFLOW_OWNER_LOGINS }}
        with:
          script: |
            const script = require('./.github/workflows/scripts/ci_workflow_owner_approval.js');
            await script({ github, context, core });
  docs-quality:
    name: Docs Quality
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@1d0ff469b7ec7b3cb9d8673fde0c81c44821de2a # v4
        with:
          node-version: 20
      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
        with:
          python-version: "3.12"

  license-file-owner-guard:
    name: License File Owner Guard
    needs: [changes]
    if: github.event_name == 'pull_request'
    runs-on: blacksmith-2vcpu-ubuntu-2404
    permissions:
      contents: read
      pull-requests: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - name: Run docs quality gate
        run: bash scripts/ci/docs_quality_gate.sh
        env:
          BASE_SHA: ${{ github.event.pull_request.base.sha || github.event.before }}

      - name: Enforce owner-only edits for root license files
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const script = require('./.github/workflows/scripts/ci_license_file_owner_guard.js');
            await script({ github, context, core });
  ci-required:
    name: CI Required Gate
    if: always()
    needs: [changes, lint, test, build, docs-only, non-rust, docs-quality, lint-feedback, workflow-owner-approval, license-file-owner-guard]
    runs-on: blacksmith-2vcpu-ubuntu-2404
    steps:
      - name: Enforce required status
        shell: bash
        run: |
          set -euo pipefail

          event_name="${{ github.event_name }}"
          rust_changed="${{ needs.changes.outputs.rust_changed }}"
          docs_changed="${{ needs.changes.outputs.docs_changed }}"
          workflow_changed="${{ needs.changes.outputs.workflow_changed }}"
          docs_result="${{ needs.docs-quality.result }}"
          workflow_owner_result="${{ needs.workflow-owner-approval.result }}"
          license_owner_result="${{ needs.license-file-owner-guard.result }}"

          if [ "${{ needs.changes.outputs.docs_only }}" = "true" ]; then
            echo "workflow_owner_approval=${workflow_owner_result}"
            echo "license_file_owner_guard=${license_owner_result}"
            if [ "$event_name" = "pull_request" ] && [ "$workflow_changed" = "true" ] && [ "$workflow_owner_result" != "success" ]; then
              echo "Workflow files changed but workflow owner approval gate did not pass."
              exit 1
            fi
            if [ "$event_name" = "pull_request" ] && [ "$license_owner_result" != "success" ]; then
              echo "License file owner guard did not pass."
              exit 1
            fi
            if [ "$event_name" != "pull_request" ] && [ "$docs_changed" = "true" ] && [ "$docs_result" != "success" ]; then
              echo "Docs-only push changed docs, but docs-quality did not pass."
              exit 1
            fi
            echo "Docs-only fast path passed."
            exit 0
          fi

          if [ "$rust_changed" != "true" ]; then
            echo "rust_changed=false (non-rust fast path)"
            echo "workflow_owner_approval=${workflow_owner_result}"
            echo "license_file_owner_guard=${license_owner_result}"
            if [ "$event_name" = "pull_request" ] && [ "$workflow_changed" = "true" ] && [ "$workflow_owner_result" != "success" ]; then
              echo "Workflow files changed but workflow owner approval gate did not pass."
              exit 1
            fi
            if [ "$event_name" = "pull_request" ] && [ "$license_owner_result" != "success" ]; then
              echo "License file owner guard did not pass."
              exit 1
            fi
            if [ "$event_name" != "pull_request" ] && [ "$docs_changed" = "true" ] && [ "$docs_result" != "success" ]; then
              echo "Non-rust push changed docs, but docs-quality did not pass."
              exit 1
            fi
            echo "Non-rust fast path passed."
            exit 0
          fi

          lint_result="${{ needs.lint.result }}"
          lint_strict_delta_result="${{ needs.lint.result }}"
          test_result="${{ needs.test.result }}"
          build_result="${{ needs.build.result }}"

          echo "lint=${lint_result}"
          echo "lint_strict_delta=${lint_strict_delta_result}"
          echo "test=${test_result}"
          echo "build=${build_result}"
          echo "docs=${docs_result}"
          echo "workflow_owner_approval=${workflow_owner_result}"
          echo "license_file_owner_guard=${license_owner_result}"

          if [ "$event_name" = "pull_request" ] && [ "$workflow_changed" = "true" ] && [ "$workflow_owner_result" != "success" ]; then
            echo "Workflow files changed but workflow owner approval gate did not pass."
            exit 1
          fi

          if [ "$event_name" = "pull_request" ] && [ "$license_owner_result" != "success" ]; then
            echo "License file owner guard did not pass."
            exit 1
          fi

          if [ "$event_name" = "pull_request" ]; then
            if [ "$build_result" != "success" ]; then
              echo "Required PR build job did not pass."
              exit 1
            fi
            echo "PR required checks passed."
            exit 0
          fi

          if [ "$lint_result" != "success" ] || [ "$lint_strict_delta_result" != "success" ] || [ "$test_result" != "success" ] || [ "$build_result" != "success" ]; then
            echo "Required push CI jobs did not pass."
            exit 1
          fi

          if [ "$docs_changed" = "true" ] && [ "$docs_result" != "success" ]; then
            echo "Push changed docs, but docs-quality did not pass."
            exit 1
          fi

          echo "Push required checks passed."
  # Composite status check — branch protection requires this single job.
  gate:
    name: CI Required Gate
    if: always()
    needs: [lint, lint-strict-delta, test, build, docs-quality]
    runs-on: ubuntu-latest
    steps:
      - name: Check upstream job results
        env:
          HAS_FAILURE: ${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}
        run: |
          if [[ "$HAS_FAILURE" == "true" ]]; then
            echo "::error::One or more upstream jobs failed or were cancelled"
            exit 1
          fi

@@ -0,0 +1,77 @@
name: Cross-Platform Build

on:
  workflow_dispatch:

permissions:
  contents: read

env:
  CARGO_TERM_COLOR: always
  CARGO_INCREMENTAL: 0

jobs:
  web:
    name: Build Web Dashboard
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
          cache-dependency-path: web/package-lock.json
      - name: Build web dashboard
        run: cd web && npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: web-dist
          path: web/dist/
          retention-days: 1

  build:
    name: Build ${{ matrix.target }}
    needs: [web]
    runs-on: ${{ matrix.os }}
    timeout-minutes: 40
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-latest
            target: aarch64-unknown-linux-gnu
            cross_compiler: gcc-aarch64-linux-gnu
            linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
            linker: aarch64-linux-gnu-gcc
          - os: macos-15-intel
            target: x86_64-apple-darwin
          - os: windows-latest
            target: x86_64-pc-windows-msvc
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          targets: ${{ matrix.target }}
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
        if: runner.os != 'Windows'

      - uses: actions/download-artifact@v4
        with:
          name: web-dist
          path: web/dist/

      - name: Install cross compiler
        if: matrix.cross_compiler
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y ${{ matrix.cross_compiler }}

      - name: Build release
        shell: bash
        run: |
          if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
            export "${{ matrix.linker_env }}=${{ matrix.linker }}"
          fi
          cargo build --release --locked --target ${{ matrix.target }}

||||
@@ -1,57 +0,0 @@
name: Feature Matrix

on:
  schedule:
    - cron: "30 4 * * 1" # Weekly Monday 4:30am UTC
  workflow_dispatch:

concurrency:
  group: feature-matrix-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read

env:
  CARGO_TERM_COLOR: always

jobs:
  feature-check:
    name: Check (${{ matrix.name }})
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 30
    strategy:
      fail-fast: false
      matrix:
        include:
          - name: no-default-features
            args: --no-default-features
            install_libudev: false
          - name: all-features
            args: --all-features
            install_libudev: true
          - name: hardware-only
            args: --no-default-features --features hardware
            install_libudev: false
          - name: browser-native
            args: --no-default-features --features browser-native
            install_libudev: false
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0

      - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
        with:
          key: features-${{ matrix.name }}

      - name: Install Linux system dependencies for all-features
        if: matrix.install_libudev
        run: |
          sudo apt-get update
          sudo apt-get install -y --no-install-recommends libudev-dev pkg-config

      - name: Check feature combination
        run: cargo check --locked ${{ matrix.args }}

||||
@@ -1,239 +0,0 @@
# Main Branch Delivery Flows

This document explains what runs when code is proposed to `dev`, promoted to `main`, and released.

Use this with:

- [`docs/ci-map.md`](../../docs/ci-map.md)
- [`docs/pr-workflow.md`](../../docs/pr-workflow.md)
- [`docs/release-process.md`](../../docs/release-process.md)

## Event Summary

| Event | Main workflows |
| --- | --- |
| PR activity (`pull_request_target`) | `pr-intake-checks.yml`, `pr-labeler.yml`, `pr-auto-response.yml` |
| PR activity (`pull_request`) | `ci-run.yml`, `sec-audit.yml`, `main-promotion-gate.yml` (for `main` PRs), plus path-scoped workflows |
| Push to `dev`/`main` | `ci-run.yml`, `sec-audit.yml`, plus path-scoped workflows |
| Tag push (`v*`) | `pub-release.yml` publish mode, `pub-docker-img.yml` publish job |
| Scheduled/manual | `pub-release.yml` verification mode, `pub-homebrew-core.yml` (manual), `sec-codeql.yml`, `feature-matrix.yml`, `test-fuzz.yml`, `pr-check-stale.yml`, `pr-check-status.yml`, `sync-contributors.yml`, `test-benchmarks.yml`, `test-e2e.yml` |

## Runtime and Docker Matrix

Observed averages below are from recent completed runs (sampled from GitHub Actions on February 17, 2026). Values are directional, not an SLA.

| Workflow | Typical trigger in main flow | Avg runtime | Docker build? | Docker run? | Docker push? |
| --- | --- | ---: | --- | --- | --- |
| `pr-intake-checks.yml` | PR open/update (`pull_request_target`) | 14.5s | No | No | No |
| `pr-labeler.yml` | PR open/update (`pull_request_target`) | 53.7s | No | No | No |
| `pr-auto-response.yml` | PR/issue automation | 24.3s | No | No | No |
| `ci-run.yml` | PR + push to `dev`/`main` | 74.7s | No | No | No |
| `sec-audit.yml` | PR + push to `dev`/`main` | 127.2s | No | No | No |
| `workflow-sanity.yml` | Workflow-file changes | 34.2s | No | No | No |
| `pr-label-policy-check.yml` | Label policy/automation changes | 14.7s | No | No | No |
| `pub-docker-img.yml` (`pull_request`) | Docker build-input PR changes | 240.4s | Yes | Yes | No |
| `pub-docker-img.yml` (`push`) | Tag push `v*` | 139.9s | Yes | No | Yes |
| `pub-release.yml` | Tag push `v*` (publish) + manual/scheduled verification (no publish) | N/A in recent sample | No | No | No |
| `pub-homebrew-core.yml` | Manual workflow dispatch only | N/A in recent sample | No | No | No |

Notes:

1. `pub-docker-img.yml` is the only workflow in the main PR/push path that builds Docker images.
2. Container runtime verification (`docker run`) occurs in PR smoke only.
3. Container registry push occurs on tag pushes (`v*`) only.
4. `ci-run.yml` "Build (Smoke)" builds Rust binaries, not Docker images.

## Step-By-Step

### 1) PR from branch in this repository -> `dev`

1. Contributor opens or updates a PR against `dev`.
2. `pull_request_target` automation runs:
   - `pr-intake-checks.yml` posts intake warnings/errors.
   - `pr-labeler.yml` sets size/risk/scope labels.
   - `pr-auto-response.yml` runs first-interaction and label routes.
3. `pull_request` CI workflows start:
   - `ci-run.yml`
   - `sec-audit.yml`
   - path-scoped workflows if matching files changed:
     - `pub-docker-img.yml` (Docker build-input paths only)
     - `workflow-sanity.yml` (workflow files only)
     - `pr-label-policy-check.yml` (label-policy files only)
4. In `ci-run.yml`, `changes` computes:
   - `docs_only`
   - `docs_changed`
   - `rust_changed`
   - `workflow_changed`
5. `build` runs for Rust-impacting changes.
6. On PRs, full lint/test/docs checks run when the PR has the `ci:full` label:
   - `lint`
   - `lint-strict-delta`
   - `test`
   - `docs-quality`
7. If `.github/workflows/**` changed, `workflow-owner-approval` must pass.
8. If root license files (`LICENSE-APACHE`, `LICENSE-MIT`) changed, `license-file-owner-guard` allows only PR author `willsarg`.
9. `lint-feedback` posts an actionable comment if lint/docs gates fail.
10. `CI Required Gate` aggregates results into the final pass/fail.
11. Maintainer merges the PR once checks and review policy are satisfied.
12. Merge emits a `push` event on `dev` (see scenario 4).

### 2) PR from fork -> `dev`

1. External contributor opens a PR from `fork/<branch>` into `zeroclaw:dev`.
2. Immediately on `opened`:
   - `pull_request_target` workflows start with base-repo context and base-repo token:
     - `pr-intake-checks.yml`
     - `pr-labeler.yml`
     - `pr-auto-response.yml`
   - `pull_request` workflows are queued for the fork head commit:
     - `ci-run.yml`
     - `sec-audit.yml`
     - path-scoped workflows (`pub-docker-img.yml`, `workflow-sanity.yml`, `pr-label-policy-check.yml`) if changed files match.
3. Fork-specific permission behavior in `pull_request` workflows:
   - The token is restricted (read-focused), so jobs that try to write PR comments or extra statuses can be limited.
   - Secrets from the base repo are not exposed to fork PR `pull_request` jobs.
4. Approval gate possibility:
   - If Actions settings require maintainer approval for fork workflows, the `pull_request` run stays in an `action_required`/waiting state until approved.
5. Event fan-out after labeling:
   - `pr-labeler.yml` and manual label changes emit `labeled`/`unlabeled` events.
   - Those events retrigger `pull_request_target` automation (`pr-labeler.yml` and `pr-auto-response.yml`), creating extra run volume/noise.
6. When the contributor pushes new commits to the fork branch (`synchronize`):
   - Reruns: `pr-intake-checks.yml`, `pr-labeler.yml`, `ci-run.yml`, `sec-audit.yml`, and matching path-scoped PR workflows.
   - Does not rerun `pr-auto-response.yml` unless label/open events occur.
7. `ci-run.yml` execution details for a fork PR:
   - `changes` computes `docs_only`, `docs_changed`, `rust_changed`, `workflow_changed`.
   - `build` runs for Rust-impacting changes.
   - `lint`/`lint-strict-delta`/`test`/`docs-quality` run on the PR when the `ci:full` label exists.
   - `workflow-owner-approval` runs when `.github/workflows/**` changed.
   - `CI Required Gate` emits the final pass/fail for the PR head.
8. Fork PR merge blockers to check first when diagnosing stalls:
   - Run approval pending for fork workflows.
   - `workflow-owner-approval` failing on workflow-file changes.
   - `license-file-owner-guard` failing when root license files are modified by a non-owner PR author.
   - `CI Required Gate` failure caused by upstream jobs.
   - Repeated `pull_request_target` reruns from label churn causing noisy signals.
9. After merge, normal `push` workflows on `dev` execute (scenario 4).

### 3) Promotion PR `dev` -> `main`

1. Maintainer opens a PR with head `dev` and base `main`.
2. `main-promotion-gate.yml` runs and fails unless the PR author is `willsarg` or `theonlyhennygod`.
3. `main-promotion-gate.yml` also fails if the head repo/branch is not `<this-repo>:dev`.
4. `ci-run.yml` and `sec-audit.yml` run on the promotion PR.
5. Maintainer merges the PR once checks and review policy pass.
6. Merge emits a `push` event on `main`.

### 4) Push to `dev` or `main` (including after merge)

1. Commit reaches `dev` or `main` (usually from a merged PR).
2. `ci-run.yml` runs on `push`.
3. `sec-audit.yml` runs on `push`.
4. Path-filtered workflows run only if touched files match their filters.
5. In `ci-run.yml`, push behavior differs from PR behavior:
   - Rust path: `lint`, `lint-strict-delta`, `test`, `build` are expected.
   - Docs/non-Rust paths: fast-path behavior applies.
6. `CI Required Gate` computes the overall push result.

## Docker Publish Logic

Workflow: `.github/workflows/pub-docker-img.yml`

### PR behavior

1. Triggered on `pull_request` to `dev` or `main` when Docker build-input paths change.
2. Runs the `PR Docker Smoke` job:
   - Builds a local smoke image with the Blacksmith builder.
   - Verifies the container with `docker run ... --version`.
3. Typical runtime in recent sample: ~240.4s.
4. No registry push happens on PR events.

### Push behavior

1. The `publish` job runs on semantic version tag pushes (`v*`) only.
2. Login to `ghcr.io` uses `${{ github.actor }}` and `${{ secrets.GITHUB_TOKEN }}`.
3. Tag computation includes the semantic tag from the pushed git tag (`vX.Y.Z`) plus a SHA tag.
4. Multi-platform publish is used for tag pushes (`linux/amd64,linux/arm64`).
5. Typical runtime in recent sample: ~139.9s.
6. Result: pushed image tags under `ghcr.io/<owner>/<repo>`.

Important: Docker publish now requires a `v*` tag push; regular `dev`/`main` branch pushes do not publish images.
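
The push-behavior tag computation can be sketched with plain shell parameter handling. This is an illustrative sketch, not the workflow's actual script: the `sha-` short-tag format and the `OWNER/REPO` image path are assumptions, and the `GITHUB_REF`/`GITHUB_SHA` values are stand-ins for what Actions would provide on a tag push.

```shell
# Hypothetical sketch of the publish job's tag computation for a v* tag push.
GITHUB_REF="refs/tags/v1.4.2"
GITHUB_SHA="0123456789abcdef0123456789abcdef01234567"

version_tag="${GITHUB_REF#refs/tags/}"                    # strip ref prefix -> v1.4.2
sha_tag="sha-$(printf '%s' "$GITHUB_SHA" | cut -c1-7)"    # short-SHA tag (assumed format)

echo "ghcr.io/OWNER/REPO:${version_tag}"
echo "ghcr.io/OWNER/REPO:${sha_tag}"
```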

## Release Logic

Workflow: `.github/workflows/pub-release.yml`

1. Trigger modes:
   - Tag push `v*` -> publish mode.
   - Manual dispatch -> verification-only or publish mode (input-driven).
   - Weekly schedule -> verification-only mode.
2. `prepare` resolves the release context (`release_ref`, `release_tag`, publish/draft mode) and validates manual publish inputs.
   - Publish mode enforces `release_tag` == `Cargo.toml` version at the tag commit.
3. `build-release` builds matrix artifacts across Linux/macOS/Windows targets.
4. `verify-artifacts` enforces the presence of all expected archives before any publish attempt.
5. In publish mode, the workflow generates SBOMs (CycloneDX + SPDX), `SHA256SUMS`, keyless cosign signatures, and verifies GHCR release-tag availability.
6. In publish mode, the workflow creates/updates the GitHub Release for the resolved tag and commit-ish.
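
The tag-versus-manifest guard in step 2 can be sketched as follows. The `sed` extraction and the inline `Cargo.toml` stand-in are assumptions for illustration, not the workflow's actual code.

```shell
# Hypothetical sketch of the publish-mode guard:
# release_tag must equal "v" + the version field in Cargo.toml at the tag commit.
release_tag="v1.4.2"

# Stand-in for reading Cargo.toml at the tag commit.
cargo_toml='[package]
name = "zeroclaw"
version = "1.4.2"'

cargo_version="$(printf '%s\n' "$cargo_toml" | sed -n 's/^version = "\(.*\)"$/\1/p' | head -n1)"

if [ "v${cargo_version}" != "${release_tag}" ]; then
  echo "::error::release tag ${release_tag} does not match Cargo.toml version ${cargo_version}"
  exit 1
fi
echo "version check passed for ${release_tag}"
```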

Manual Homebrew formula flow:

1. Run `.github/workflows/pub-homebrew-core.yml` with `release_tag=vX.Y.Z`.
2. Use `dry_run=true` first to validate the formula patch and metadata.
3. Use `dry_run=false` to push from the bot fork and open the `homebrew-core` PR.

## Merge/Policy Notes

1. Workflow-file changes (`.github/workflows/**`) activate the owner-approval gate in `ci-run.yml`.
2. PR lint/test strictness is intentionally controlled by the `ci:full` label.
3. `sec-audit.yml` runs on both PR and push, plus a weekly schedule.
4. Some workflows are operational and outside the merge path (`pr-check-stale`, `pr-check-status`, `sync-contributors`, etc.).
5. Workflow-specific JavaScript helpers are organized under `.github/workflows/scripts/`.

## Mermaid Diagrams

### PR to Dev

```mermaid
flowchart TD
    A["PR opened or updated -> dev"] --> B["pull_request_target lane"]
    B --> B1["pr-intake-checks.yml"]
    B --> B2["pr-labeler.yml"]
    B --> B3["pr-auto-response.yml"]
    A --> C["pull_request CI lane"]
    C --> C1["ci-run.yml"]
    C --> C2["sec-audit.yml"]
    C --> C3["pub-docker-img.yml (if Docker paths changed)"]
    C --> C4["workflow-sanity.yml (if workflow files changed)"]
    C --> C5["pr-label-policy-check.yml (if policy files changed)"]
    C1 --> D["CI Required Gate"]
    D --> E{"Checks + review policy pass?"}
    E -->|No| F["PR stays open"]
    E -->|Yes| G["Merge PR"]
    G --> H["push event on dev"]
```

### Promotion and Release

```mermaid
flowchart TD
    D0["Commit reaches dev"] --> B0["ci-run.yml"]
    D0 --> C0["sec-audit.yml"]
    P["Promotion PR dev -> main"] --> PG["main-promotion-gate.yml"]
    PG --> M["Merge to main"]
    M --> A["Commit reaches main"]
    A --> B["ci-run.yml"]
    A --> C["sec-audit.yml"]
    A --> D["path-scoped workflows (if matched)"]
    T["Tag push v*"] --> R["pub-release.yml"]
    W["Manual/Scheduled release verify"] --> R
    T --> DK["pub-docker-img.yml publish job"]
    R --> R1["Artifacts + SBOM + checksums + signatures + GitHub Release"]
    W --> R2["Verification build only (no GitHub Release publish)"]
    DK --> P1["Push ghcr image tags (version + sha)"]
```

## Quick Troubleshooting

1. Unexpected skipped jobs: inspect `scripts/ci/detect_change_scope.sh` outputs.
2. Workflow-change PR blocked: verify `WORKFLOW_OWNER_LOGINS` and approvals.
3. Fork PR appears stalled: check whether Actions run approval is pending.
4. Docker not published: confirm a `v*` tag was pushed to the intended commit.

@@ -1,55 +0,0 @@
name: Main Promotion Gate

on:
  pull_request:
    branches: [main]

concurrency:
  group: main-promotion-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  enforce-dev-promotion:
    name: Enforce Dev -> Main Promotion
    runs-on: blacksmith-2vcpu-ubuntu-2404
    steps:
      - name: Validate PR source branch
        shell: bash
        env:
          HEAD_REF: ${{ github.head_ref }}
          HEAD_REPO: ${{ github.event.pull_request.head.repo.full_name }}
          BASE_REPO: ${{ github.repository }}
          PR_AUTHOR: ${{ github.event.pull_request.user.login }}
        run: |
          set -euo pipefail

          pr_author_lc="$(echo "${PR_AUTHOR}" | tr '[:upper:]' '[:lower:]')"
          allowed_authors=("willsarg" "theonlyhennygod")

          is_allowed_author=false
          for allowed in "${allowed_authors[@]}"; do
            if [[ "$pr_author_lc" == "$allowed" ]]; then
              is_allowed_author=true
              break
            fi
          done

          if [[ "$is_allowed_author" != "true" ]]; then
            echo "::error::PRs into main are restricted to: willsarg, theonlyhennygod. PR author: ${PR_AUTHOR}. Open this PR against dev instead."
            exit 1
          fi

          if [[ "$HEAD_REPO" != "$BASE_REPO" ]]; then
            echo "::error::PRs into main must originate from ${BASE_REPO}:dev. Current head repo: ${HEAD_REPO}."
            exit 1
          fi

          if [[ "$HEAD_REF" != "dev" ]]; then
            echo "::error::PRs into main must use head branch 'dev'. Current head branch: ${HEAD_REF}."
            exit 1
          fi

          echo "Promotion policy satisfied: author=${PR_AUTHOR}, source=${HEAD_REPO}:${HEAD_REF} -> main"

@@ -0,0 +1,130 @@
# Master Branch Delivery Flows

This document explains what runs when code is proposed to `master` and released.

Use this with:

- [`docs/contributing/ci-map.md`](../../docs/contributing/ci-map.md)
- [`docs/contributing/pr-workflow.md`](../../docs/contributing/pr-workflow.md)
- [`docs/contributing/release-process.md`](../../docs/contributing/release-process.md)

## Branching Model

ZeroClaw uses a single default branch: `master`. All contributor PRs target `master` directly. There is no `dev` or promotion branch.

Current maintainers with PR approval authority: `theonlyhennygod`, `JordanTheJet`, and `SimianAstronaut7`.

## Active Workflows

| File | Trigger | Purpose |
| --- | --- | --- |
| `checks-on-pr.yml` | `pull_request` → `master` | Lint + test + build + security audit on every PR |
| `cross-platform-build-manual.yml` | `workflow_dispatch` | Full platform build matrix (manual) |
| `release-beta-on-push.yml` | `push` → `master` | Beta release on every master commit |
| `release-stable-manual.yml` | `workflow_dispatch` | Stable release (manual, version-gated) |

## Event Summary

| Event | Workflows triggered |
| --- | --- |
| PR opened or updated against `master` | `checks-on-pr.yml` |
| Push to `master` (including after merge) | `release-beta-on-push.yml` |
| Manual dispatch | `cross-platform-build-manual.yml`, `release-stable-manual.yml` |

## Step-By-Step

### 1) PR → `master`

1. Contributor opens or updates a PR against `master`.
2. `checks-on-pr.yml` starts:
   - `lint` job: runs `cargo fmt --check` and `cargo clippy -- -D warnings`.
   - `test` job: runs `cargo nextest run --locked` on `ubuntu-latest` with Rust 1.92.0 and the mold linker.
   - `build` job (matrix): compiles the release binary for `x86_64-unknown-linux-gnu` and `aarch64-apple-darwin`.
   - `security` job: runs `cargo audit` and `cargo deny check licenses sources`.
   - The concurrency group cancels in-progress runs for the same PR on new pushes.
3. All jobs must pass before merge.
4. A maintainer (`theonlyhennygod`, `JordanTheJet`, or `SimianAstronaut7`) merges the PR once checks and review policy are satisfied.
5. Merge emits a `push` event on `master` (see section 2).

### 2) Push to `master` (including after merge)

1. Commit reaches `master`.
2. `release-beta-on-push.yml` (Release Beta) starts:
   - `version` job: computes the beta tag as `v{cargo_version}-beta.{run_number}`.
   - `build` job (matrix, 4 targets): `x86_64-linux`, `aarch64-linux`, `aarch64-darwin`, `x86_64-windows`.
   - `publish` job: generates `SHA256SUMS` and creates a GitHub pre-release with all artifacts. Artifact retention: 7 days.
   - `docker` job: builds a multi-platform image (`linux/amd64,linux/arm64`) and pushes to `ghcr.io` with `:beta` and the versioned beta tag.
3. This runs on every push to `master` without filtering. Every merged PR produces a beta pre-release.
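
The `version` job's tag composition can be sketched in a few lines of shell. The `Cargo.toml` parsing here is an illustrative assumption; `GITHUB_RUN_NUMBER` is supplied by Actions and is fixed here so the sketch is self-contained.

```shell
# Hypothetical sketch of the version job: v{cargo_version}-beta.{run_number}.
GITHUB_RUN_NUMBER=57
cargo_toml='version = "0.2.0"'   # stand-in for the Cargo.toml version line

cargo_version="$(printf '%s\n' "$cargo_toml" | sed -n 's/^version = "\(.*\)"$/\1/p')"
beta_tag="v${cargo_version}-beta.${GITHUB_RUN_NUMBER}"
echo "$beta_tag"
```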

### 3) Stable Release (manual)

1. Maintainer runs `release-stable-manual.yml` via `workflow_dispatch` with a version input (e.g. `0.2.0`).
2. `validate` job checks:
   - Input matches the semver `X.Y.Z` format.
   - `Cargo.toml` version matches the input exactly.
   - Tag `vX.Y.Z` does not already exist on the remote.
3. `build` job (matrix, same 4 targets as beta): compiles the release binary.
4. `publish` job: generates `SHA256SUMS` and creates a stable GitHub Release (not a pre-release). Artifact retention: 14 days.
5. `docker` job: pushes to `ghcr.io` with `:latest` and `:vX.Y.Z`.
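
The three validate checks can be sketched in portable shell. The tag-existence check is stubbed with a fixed list instead of `git ls-remote`, purely so the sketch runs standalone; all values are stand-ins, not the workflow's actual inputs.

```shell
# Hypothetical sketch of the validate job's three checks.
input_version="0.2.0"
cargo_version="0.2.0"          # stand-in for the value parsed from Cargo.toml
existing_tags="v0.1.0 v0.1.1"  # stand-in for remote tag listing

# 1) input must be strict semver X.Y.Z
printf '%s\n' "$input_version" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$' \
  || { echo "::error::input is not X.Y.Z semver"; exit 1; }

# 2) Cargo.toml version must match the input exactly
[ "$input_version" = "$cargo_version" ] \
  || { echo "::error::Cargo.toml version mismatch"; exit 1; }

# 3) tag vX.Y.Z must not already exist
case " $existing_tags " in
  *" v${input_version} "*) echo "::error::tag v${input_version} already exists"; exit 1 ;;
esac

validate_result="passed"
echo "validate ${validate_result} for v${input_version}"
```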

### 4) Full Platform Build (manual)

1. Maintainer runs `cross-platform-build-manual.yml` via `workflow_dispatch`.
2. `build` job (matrix, 3 targets): `aarch64-unknown-linux-gnu`, `x86_64-apple-darwin` (macOS 15 Intel), `x86_64-pc-windows-msvc`.
3. Build-only: no tests, no publish. Used to verify cross-compilation on platforms not covered by `checks-on-pr.yml`.

## Build Targets by Workflow

| Target | `checks-on-pr.yml` | `cross-platform-build-manual.yml` | `release-beta-on-push.yml` | `release-stable-manual.yml` |
| --- | :---: | :---: | :---: | :---: |
| `x86_64-unknown-linux-gnu` | ✓ | | ✓ | ✓ |
| `aarch64-unknown-linux-gnu` | | ✓ | ✓ | ✓ |
| `aarch64-apple-darwin` | ✓ | | ✓ | ✓ |
| `x86_64-apple-darwin` | | ✓ | | |
| `x86_64-pc-windows-msvc` | | ✓ | ✓ | ✓ |

## Mermaid Diagrams

### PR to Master

```mermaid
flowchart TD
    A["PR opened or updated → master"] --> B["checks-on-pr.yml"]
    B --> B0["lint: fmt + clippy"]
    B --> B1["test: cargo nextest (ubuntu-latest)"]
    B --> B2["build: x86_64-linux + aarch64-darwin"]
    B --> B3["security: audit + deny"]
    B0 & B1 & B2 & B3 --> C{"Checks pass?"}
    C -->|No| D["PR stays open"]
    C -->|Yes| E["Maintainer merges"]
    E --> F["push event on master"]
```

### Beta Release (on every master push)

```mermaid
flowchart TD
    A["Push to master"] --> B["release-beta-on-push.yml"]
    B --> B1["version: compute v{x.y.z}-beta.{N}"]
    B1 --> B2["build: 4 targets"]
    B2 --> B3["publish: GitHub pre-release + SHA256SUMS"]
    B2 --> B4["docker: push ghcr.io :beta + versioned tag"]
```

### Stable Release (manual)

```mermaid
flowchart TD
    A["workflow_dispatch: version=X.Y.Z"] --> B["release-stable-manual.yml"]
    B --> B1["validate: semver + Cargo.toml + tag uniqueness"]
    B1 --> B2["build: 4 targets"]
    B2 --> B3["publish: GitHub stable release + SHA256SUMS"]
    B2 --> B4["docker: push ghcr.io :latest + :vX.Y.Z"]
```

## Quick Troubleshooting

1. **Quality gate failing on PR**: check the `lint` job for formatting/clippy issues, the `test` job for test failures, the `build` job for compile errors, and the `security` job for audit/deny failures.
2. **Beta release not appearing**: confirm the push landed on `master` (not another branch); check the `release-beta-on-push.yml` run status.
3. **Stable release failing at validate**: ensure the `Cargo.toml` version matches the input version and the tag does not already exist.
4. **Full matrix build needed**: run `cross-platform-build-manual.yml` manually from the Actions tab.

@@ -1,86 +0,0 @@
name: PR Auto Responder

on:
  issues:
    types: [opened, reopened, labeled, unlabeled]
  pull_request_target:
    branches: [dev, main]
    types: [opened, labeled, unlabeled]

permissions: {}

env:
  LABEL_POLICY_PATH: .github/label-policy.json

jobs:
  contributor-tier-issues:
    if: >-
      (github.event_name == 'issues' &&
       (github.event.action == 'opened' || github.event.action == 'reopened' || github.event.action == 'labeled' || github.event.action == 'unlabeled')) ||
      (github.event_name == 'pull_request_target' &&
       (github.event.action == 'labeled' || github.event.action == 'unlabeled'))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
      pull-requests: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Apply contributor tier label for issue author
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          LABEL_POLICY_PATH: .github/label-policy.json
        with:
          script: |
            const script = require('./.github/workflows/scripts/pr_auto_response_contributor_tier.js');
            await script({ github, context, core });

  first-interaction:
    if: github.event.action == 'opened'
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - name: Greet first-time contributors
        uses: actions/first-interaction@a1db7729b356323c7988c20ed6f0d33fe31297be # v1
        with:
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          issue_message: |
            Thanks for opening this issue.

            Before maintainers triage it, please confirm:
            - Repro steps are complete and run on latest `main`
            - Environment details are included (OS, Rust version, ZeroClaw version)
            - Sensitive values are redacted

            This helps us keep issue throughput high and response latency low.
          pr_message: |
            Thanks for contributing to ZeroClaw.

            For faster review, please ensure:
            - PR template sections are fully completed
            - `cargo fmt --all -- --check`, `cargo clippy --all-targets -- -D warnings`, and `cargo test` results are included
            - If automation/agents were used heavily, add brief workflow notes
            - Scope is focused (prefer one concern per PR)

            See `CONTRIBUTING.md` and `docs/pr-workflow.md` for full collaboration rules.

  labeled-routes:
    if: github.event.action == 'labeled'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write
      pull-requests: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Handle label-driven responses
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const script = require('./.github/workflows/scripts/pr_auto_response_labeled_routes.js');
            await script({ github, context, core });

@@ -1,44 +0,0 @@
name: PR Check Stale

on:
  schedule:
    - cron: "20 2 * * *"
  workflow_dispatch:

permissions: {}

jobs:
  stale:
    permissions:
      issues: write
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      - name: Mark stale issues and pull requests
        uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          days-before-issue-stale: 21
          days-before-issue-close: 7
          days-before-pr-stale: 14
          days-before-pr-close: 7
          stale-issue-label: stale
          stale-pr-label: stale
          exempt-issue-labels: security,pinned,no-stale,no-pr-hygiene,maintainer
          exempt-pr-labels: no-stale,no-pr-hygiene,maintainer
          remove-stale-when-updated: true
          exempt-all-assignees: true
          operations-per-run: 300
          stale-issue-message: |
            This issue was automatically marked as stale due to inactivity.
            Please provide an update, reproduction details, or current status to keep it open.
          close-issue-message: |
            Closing this issue due to inactivity.
            If the problem still exists on the latest `main`, please open a new issue with fresh repro steps.
          close-issue-reason: not_planned
          stale-pr-message: |
            This PR was automatically marked as stale due to inactivity.
            Please rebase/update and post the latest validation results.
          close-pr-message: |
            Closing this PR due to inactivity.
            Maintainers can reopen once the branch is updated and validation is provided.

@@ -1,32 +0,0 @@
name: PR Check Status

on:
  schedule:
    - cron: "15 8 * * *" # Once daily at 8:15am UTC
  workflow_dispatch:

permissions: {}

concurrency:
  group: pr-check-status
  cancel-in-progress: true

jobs:
  nudge-stale-prs:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write
    env:
      STALE_HOURS: "48"
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Nudge PRs that need rebase or CI refresh
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const script = require('./.github/workflows/scripts/pr_check_status_nudge.js');
            await script({ github, context, core });

@@ -1,31 +0,0 @@
name: PR Intake Checks

on:
  pull_request_target:
    branches: [dev, main]
    types: [opened, reopened, synchronize, edited, ready_for_review]

concurrency:
  group: pr-intake-checks-${{ github.event.pull_request.number || github.run_id }}
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write
  issues: write

jobs:
  intake:
    name: Intake Checks
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Run safe PR intake checks
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const script = require('./.github/workflows/scripts/pr_intake_checks.js');
            await script({ github, context, core });

@@ -1,74 +0,0 @@
name: PR Label Policy Check

on:
  pull_request:
    paths:
      - ".github/label-policy.json"
      - ".github/workflows/pr-labeler.yml"
      - ".github/workflows/pr-auto-response.yml"
  push:
    paths:
      - ".github/label-policy.json"
      - ".github/workflows/pr-labeler.yml"
      - ".github/workflows/pr-auto-response.yml"

concurrency:
  group: pr-label-policy-check-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  contributor-tier-consistency:
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 10
    steps:
      - name: Checkout
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Verify shared label policy and workflow wiring
        shell: bash
        run: |
          set -euo pipefail
          python3 - <<'PY'
          import json
          import re
          from pathlib import Path

          policy_path = Path('.github/label-policy.json')
          policy = json.loads(policy_path.read_text(encoding='utf-8'))
          color = str(policy.get('contributor_tier_color', '')).upper()
          rules = policy.get('contributor_tiers', [])
          if not re.fullmatch(r'[0-9A-F]{6}', color):
              raise SystemExit('invalid contributor_tier_color in .github/label-policy.json')
          if not rules:
              raise SystemExit('contributor_tiers must not be empty in .github/label-policy.json')

          labels = set()
          prev_min = None
          for entry in rules:
              label = str(entry.get('label', '')).strip().lower()
              min_merged = int(entry.get('min_merged_prs', 0))
              if not label.endswith('contributor'):
                  raise SystemExit(f'invalid contributor tier label: {label}')
              if label in labels:
                  raise SystemExit(f'duplicate contributor tier label: {label}')
              if prev_min is not None and min_merged > prev_min:
                  raise SystemExit('contributor_tiers must be sorted descending by min_merged_prs')
              labels.add(label)
              prev_min = min_merged

          workflow_paths = [
              Path('.github/workflows/pr-labeler.yml'),
              Path('.github/workflows/pr-auto-response.yml'),
          ]
          for workflow in workflow_paths:
              text = workflow.read_text(encoding='utf-8')
              if '.github/label-policy.json' not in text:
                  raise SystemExit(f'{workflow} must load .github/label-policy.json')
              if re.search(r'contributorTierColor\s*=\s*"[0-9A-Fa-f]{6}"', text):
                  raise SystemExit(f'{workflow} contains hardcoded contributorTierColor')

          print('label policy file is valid and workflow consumers are wired to shared policy')
          PY
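The embedded validator above can be exercised locally against a candidate policy before pushing. A minimal sketch, mirroring the workflow's checks; the tier names, thresholds, and color here are illustrative, not the repository's actual policy:

```python
import re

def validate_policy(policy: dict) -> None:
    """Same rules as the PR Label Policy Check workflow's inline script."""
    color = str(policy.get('contributor_tier_color', '')).upper()
    if not re.fullmatch(r'[0-9A-F]{6}', color):
        raise SystemExit('invalid contributor_tier_color')
    rules = policy.get('contributor_tiers', [])
    if not rules:
        raise SystemExit('contributor_tiers must not be empty')
    labels, prev_min = set(), None
    for entry in rules:
        label = str(entry.get('label', '')).strip().lower()
        min_merged = int(entry.get('min_merged_prs', 0))
        if not label.endswith('contributor'):
            raise SystemExit(f'invalid contributor tier label: {label}')
        if label in labels:
            raise SystemExit(f'duplicate contributor tier label: {label}')
        if prev_min is not None and min_merged > prev_min:
            raise SystemExit('contributor_tiers must be sorted descending by min_merged_prs')
        labels.add(label)
        prev_min = min_merged

# Hypothetical policy satisfying every rule: 6-digit hex color, labels ending
# in "contributor", tiers sorted descending by min_merged_prs.
sample = {
    "contributor_tier_color": "1D76DB",
    "contributor_tiers": [
        {"label": "core contributor", "min_merged_prs": 25},
        {"label": "regular contributor", "min_merged_prs": 5},
        {"label": "new contributor", "min_merged_prs": 1},
    ],
}
validate_policy(sample)  # passes silently; a bad policy raises SystemExit
```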
@@ -1,53 +0,0 @@
name: PR Labeler

on:
  pull_request_target:
    branches: [dev, main]
    types: [opened, reopened, synchronize, edited, labeled, unlabeled]
  workflow_dispatch:
    inputs:
      mode:
        description: "Run mode for managed-label governance"
        required: true
        default: "audit"
        type: choice
        options:
          - audit
          - repair

concurrency:
  group: pr-labeler-${{ github.event.pull_request.number || github.run_id }}
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write
  issues: write

env:
  LABEL_POLICY_PATH: .github/label-policy.json

jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Apply path labels
        if: github.event_name == 'pull_request_target'
        uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
        continue-on-error: true
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          sync-labels: true

      - name: Apply size/risk/module labels
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        continue-on-error: true
        env:
          LABEL_POLICY_PATH: .github/label-policy.json
        with:
          script: |
            const script = require('./.github/workflows/scripts/pr_labeler.js');
            await script({ github, context, core });
@@ -1,175 +0,0 @@
name: Pub Docker Img

on:
  push:
    tags: ["v*"]
  pull_request:
    branches: [dev, main]
    paths:
      - "Dockerfile"
      - ".dockerignore"
      - "docker-compose.yml"
      - "rust-toolchain.toml"
      - "dev/config.template.toml"
      - ".github/workflows/pub-docker-img.yml"
  workflow_dispatch:

concurrency:
  group: docker-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  pr-smoke:
    name: PR Docker Smoke
    if: github.event_name == 'workflow_dispatch' || (github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository)
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 25
    permissions:
      contents: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Setup Blacksmith Builder
        uses: useblacksmith/setup-docker-builder@ef12d5b165b596e3aa44ea8198d8fde563eab402 # v1

      - name: Extract metadata (tags, labels)
        if: github.event_name == 'pull_request'
        id: meta
        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=pr

      - name: Build smoke image
        uses: useblacksmith/build-push-action@30c71162f16ea2c27c3e21523255d209b8b538c1 # v2
        with:
          context: .
          push: false
          load: true
          provenance: false
          sbom: false
          tags: zeroclaw-pr-smoke:latest
          labels: ${{ steps.meta.outputs.labels || '' }}
          platforms: linux/amd64
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Verify image
        run: docker run --rm zeroclaw-pr-smoke:latest --version

  publish:
    name: Build and Push Docker Image
    if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') && github.repository == 'zeroclaw-labs/zeroclaw'
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 45
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Setup Blacksmith Builder
        uses: useblacksmith/setup-docker-builder@ef12d5b165b596e3aa44ea8198d8fde563eab402 # v1

      - name: Log in to Container Registry
        uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Compute tags
        id: meta
        shell: bash
        run: |
          set -euo pipefail
          IMAGE="${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}"
          SHA_TAG="${IMAGE}:sha-${GITHUB_SHA::12}"
          if [[ "${GITHUB_REF}" != refs/tags/v* ]]; then
            echo "::error::Docker publish is restricted to v* tag pushes."
            exit 1
          fi

          TAG_NAME="${GITHUB_REF#refs/tags/}"
          TAGS="${IMAGE}:${TAG_NAME},${SHA_TAG}"

          echo "tags=${TAGS}" >> "$GITHUB_OUTPUT"

      - name: Build and push Docker image
        uses: useblacksmith/build-push-action@30c71162f16ea2c27c3e21523255d209b8b538c1 # v2
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          platforms: linux/amd64,linux/arm64
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Set GHCR package visibility to public
        shell: bash
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          set -euo pipefail
          owner="${GITHUB_REPOSITORY_OWNER,,}"
          repo="${GITHUB_REPOSITORY#*/}"

          # Package path can vary depending on repository/package linkage.
          candidates=(
            "$repo"
            "${owner}%2F${repo}"
          )

          for scope in orgs users; do
            for pkg in "${candidates[@]}"; do
              code="$(curl -sS -o /tmp/ghcr-visibility.json -w "%{http_code}" \
                -X PATCH \
                -H "Authorization: Bearer ${GH_TOKEN}" \
                -H "Accept: application/vnd.github+json" \
                -H "X-GitHub-Api-Version: 2022-11-28" \
                "https://api.github.com/${scope}/${owner}/packages/container/${pkg}/visibility" \
                -d '{"visibility":"public"}' || true)"

              if [ "$code" = "200" ] || [ "$code" = "204" ]; then
                echo "GHCR package visibility is public (${scope}/${owner}/${pkg})."
                exit 0
              fi

              echo "Visibility attempt ${scope}/${owner}/${pkg} returned HTTP ${code}."
            done
          done

          echo "::warning::Unable to update GHCR visibility via API in this run; proceeding to direct anonymous pull verification."

      - name: Verify anonymous GHCR pull access
        shell: bash
        run: |
          set -euo pipefail
          TAG_NAME="${GITHUB_REF#refs/tags/}"
          token_resp="$(curl -sS "https://ghcr.io/token?scope=repository:${GITHUB_REPOSITORY}:pull")"
          token="$(echo "$token_resp" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')"

          if [ -z "$token" ]; then
            echo "::error::Anonymous GHCR token request failed: $token_resp"
            exit 1
          fi

          code="$(curl -sS -o /tmp/ghcr-manifest.json -w "%{http_code}" \
            -H "Authorization: Bearer ${token}" \
            -H "Accept: application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json" \
            "https://ghcr.io/v2/${GITHUB_REPOSITORY}/manifests/${TAG_NAME}")"

          if [ "$code" != "200" ]; then
            echo "::error::Anonymous manifest pull failed with HTTP ${code}"
            cat /tmp/ghcr-manifest.json || true
            exit 1
          fi

          echo "Anonymous GHCR pull access verified."
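The anonymous-pull verification above extracts the bearer token from the GHCR token endpoint's JSON with `sed`. A quick local sketch of that extraction; the sample response body is fabricated for illustration, not a real GHCR reply:

```shell
# Hypothetical token response; GHCR returns JSON containing a "token" key.
token_resp='{"token":"abc123","expires_in":300}'

# Same sed extraction the workflow uses: capture the value of the "token" key.
token="$(echo "$token_resp" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')"
echo "$token"
```

A dedicated JSON parser such as `jq -r .token` would be more robust, but the `sed` form avoids any dependency on the runner image.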
@@ -1,221 +0,0 @@
name: Pub Homebrew Core

on:
  workflow_dispatch:
    inputs:
      release_tag:
        description: "Existing release tag to publish (vX.Y.Z)"
        required: true
        type: string
      dry_run:
        description: "Patch formula only (no push/PR)"
        required: false
        default: true
        type: boolean

concurrency:
  group: homebrew-core-${{ github.run_id }}
  cancel-in-progress: false

permissions:
  contents: read

jobs:
  publish-homebrew-core:
    name: Publish Homebrew Core PR
    runs-on: blacksmith-2vcpu-ubuntu-2404
    env:
      UPSTREAM_REPO: Homebrew/homebrew-core
      FORMULA_PATH: Formula/z/zeroclaw.rb
      RELEASE_TAG: ${{ inputs.release_tag }}
      DRY_RUN: ${{ inputs.dry_run }}
      BOT_FORK_REPO: ${{ vars.HOMEBREW_CORE_BOT_FORK_REPO }}
      BOT_EMAIL: ${{ vars.HOMEBREW_CORE_BOT_EMAIL }}
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          fetch-depth: 0

      - name: Validate release tag and version alignment
        id: release_meta
        shell: bash
        run: |
          set -euo pipefail

          semver_pattern='^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?$'
          if [[ ! "$RELEASE_TAG" =~ $semver_pattern ]]; then
            echo "::error::release_tag must match semver-like format (vX.Y.Z[-suffix])."
            exit 1
          fi

          if ! git rev-parse "refs/tags/${RELEASE_TAG}" >/dev/null 2>&1; then
            git fetch --tags origin
          fi

          tag_version="${RELEASE_TAG#v}"
          cargo_version="$(git show "${RELEASE_TAG}:Cargo.toml" | sed -n 's/^version = "\([^"]*\)"/\1/p' | head -n1)"
          if [[ -z "$cargo_version" ]]; then
            echo "::error::Unable to read Cargo.toml version from tag ${RELEASE_TAG}."
            exit 1
          fi
          if [[ "$cargo_version" != "$tag_version" ]]; then
            echo "::error::Tag ${RELEASE_TAG} does not match Cargo.toml version (${cargo_version})."
            echo "::error::Bump Cargo.toml first, then publish Homebrew."
            exit 1
          fi

          tarball_url="https://github.com/${GITHUB_REPOSITORY}/archive/refs/tags/${RELEASE_TAG}.tar.gz"
          tarball_sha="$(curl -fsSL "$tarball_url" | sha256sum | awk '{print $1}')"

          {
            echo "tag_version=$tag_version"
            echo "tarball_url=$tarball_url"
            echo "tarball_sha=$tarball_sha"
          } >> "$GITHUB_OUTPUT"

          {
            echo "### Release Metadata"
            echo "- release_tag: ${RELEASE_TAG}"
            echo "- cargo_version: ${cargo_version}"
            echo "- tarball_sha256: ${tarball_sha}"
            echo "- dry_run: ${DRY_RUN}"
          } >> "$GITHUB_STEP_SUMMARY"

      - name: Patch Homebrew formula
        id: patch_formula
        shell: bash
        env:
          HOMEBREW_CORE_BOT_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
          GH_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
        run: |
          set -euo pipefail

          tmp_repo="$(mktemp -d)"
          echo "tmp_repo=$tmp_repo" >> "$GITHUB_OUTPUT"

          if [[ "$DRY_RUN" == "true" ]]; then
            git clone --depth=1 "https://github.com/${UPSTREAM_REPO}.git" "$tmp_repo/homebrew-core"
          else
            if [[ -z "${BOT_FORK_REPO}" ]]; then
              echo "::error::Repository variable HOMEBREW_CORE_BOT_FORK_REPO is required when dry_run=false."
              exit 1
            fi
            if [[ -z "${HOMEBREW_CORE_BOT_TOKEN}" ]]; then
              echo "::error::Repository secret HOMEBREW_CORE_BOT_TOKEN is required when dry_run=false."
              exit 1
            fi
            if [[ "$BOT_FORK_REPO" != */* ]]; then
              echo "::error::HOMEBREW_CORE_BOT_FORK_REPO must be in owner/repo format."
              exit 1
            fi
            if ! command -v gh >/dev/null 2>&1; then
              echo "::error::gh CLI is required on the runner."
              exit 1
            fi
            if [[ -z "${GH_TOKEN:-}" ]]; then
              echo "::error::Repository secret HOMEBREW_CORE_BOT_TOKEN is missing."
              exit 1
            fi
            if ! gh api "repos/${BOT_FORK_REPO}" >/dev/null 2>&1; then
              echo "::error::HOMEBREW_CORE_BOT_TOKEN cannot access ${BOT_FORK_REPO}."
              exit 1
            fi
            gh repo clone "${BOT_FORK_REPO}" "$tmp_repo/homebrew-core" -- --depth=1
          fi

          repo_dir="$tmp_repo/homebrew-core"
          formula_file="$repo_dir/$FORMULA_PATH"
          if [[ ! -f "$formula_file" ]]; then
            echo "::error::Formula file not found: $FORMULA_PATH"
            exit 1
          fi

          if [[ "$DRY_RUN" == "false" ]]; then
            if git -C "$repo_dir" remote get-url upstream >/dev/null 2>&1; then
              git -C "$repo_dir" remote set-url upstream "https://github.com/${UPSTREAM_REPO}.git"
            else
              git -C "$repo_dir" remote add upstream "https://github.com/${UPSTREAM_REPO}.git"
            fi
            if git -C "$repo_dir" ls-remote --exit-code --heads upstream main >/dev/null 2>&1; then
              upstream_ref="main"
            else
              upstream_ref="master"
            fi
            git -C "$repo_dir" fetch --depth=1 upstream "$upstream_ref"
            branch_name="zeroclaw-${RELEASE_TAG}-${GITHUB_RUN_ID}"
            git -C "$repo_dir" checkout -B "$branch_name" "upstream/$upstream_ref"
            echo "branch_name=$branch_name" >> "$GITHUB_OUTPUT"
          fi

          tarball_url="${{ steps.release_meta.outputs.tarball_url }}"
          tarball_sha="${{ steps.release_meta.outputs.tarball_sha }}"

          perl -0pi -e "s|^  url \".*\"|  url \"${tarball_url}\"|m" "$formula_file"
          perl -0pi -e "s|^  sha256 \".*\"|  sha256 \"${tarball_sha}\"|m" "$formula_file"
          perl -0pi -e "s|^  license \".*\"|  license \"Apache-2.0 OR MIT\"|m" "$formula_file"
          perl -0pi -e 's|^  head "https://github\.com/zeroclaw-labs/zeroclaw\.git".*|  head "https://github.com/zeroclaw-labs/zeroclaw.git"|m' "$formula_file"

          git -C "$repo_dir" diff -- "$FORMULA_PATH" > "$tmp_repo/formula.diff"
          if [[ ! -s "$tmp_repo/formula.diff" ]]; then
            echo "::error::No formula changes generated. Nothing to publish."
            exit 1
          fi

          {
            echo "### Formula Diff"
            echo '```diff'
            cat "$tmp_repo/formula.diff"
            echo '```'
          } >> "$GITHUB_STEP_SUMMARY"

      - name: Push branch and open Homebrew PR
        if: ${{ inputs.dry_run == false }}
        shell: bash
        env:
          GH_TOKEN: ${{ secrets.HOMEBREW_UPSTREAM_PR_TOKEN || secrets.HOMEBREW_CORE_BOT_TOKEN }}
        run: |
          set -euo pipefail

          repo_dir="${{ steps.patch_formula.outputs.tmp_repo }}/homebrew-core"
          branch_name="${{ steps.patch_formula.outputs.branch_name }}"
          tag_version="${{ steps.release_meta.outputs.tag_version }}"
          fork_owner="${BOT_FORK_REPO%%/*}"
          bot_email="${BOT_EMAIL:-${fork_owner}@users.noreply.github.com}"

          git -C "$repo_dir" config user.name "$fork_owner"
          git -C "$repo_dir" config user.email "$bot_email"
          git -C "$repo_dir" add "$FORMULA_PATH"
          git -C "$repo_dir" commit -m "zeroclaw ${tag_version}"
          if [[ -z "${GH_TOKEN:-}" ]]; then
            echo "::error::Repository secret HOMEBREW_CORE_BOT_TOKEN is missing."
            exit 1
          fi
          gh auth setup-git
          git -C "$repo_dir" push --set-upstream origin "$branch_name"

          pr_title="zeroclaw ${tag_version}"
          pr_body=$(cat <<EOF
          Automated formula bump from ZeroClaw release workflow.

          - Release tag: ${RELEASE_TAG}
          - Source tarball: ${{ steps.release_meta.outputs.tarball_url }}
          - Source sha256: ${{ steps.release_meta.outputs.tarball_sha }}
          EOF
          )

          gh pr create \
            --repo "$UPSTREAM_REPO" \
            --base main \
            --head "${fork_owner}:${branch_name}" \
            --title "$pr_title" \
            --body "$pr_body"

      - name: Summary output
        shell: bash
        run: |
          set -euo pipefail
          if [[ "$DRY_RUN" == "true" ]]; then
            echo "Dry run complete: formula diff generated, no push/PR performed."
          else
            echo "Publish complete: branch pushed and PR opened from bot fork."
          fi
@@ -1,435 +0,0 @@
name: Pub Release

on:
  push:
    tags: ["v*"]
  workflow_dispatch:
    inputs:
      release_ref:
        description: "Git ref (branch, tag, or SHA) to build"
        required: false
        default: "main"
        type: string
      publish_release:
        description: "Publish a GitHub release (false = verification build only)"
        required: false
        default: false
        type: boolean
      release_tag:
        description: "Existing release tag (required when publish_release=true), e.g. v0.1.1"
        required: false
        default: ""
        type: string
      draft:
        description: "Create release as draft (manual publish only)"
        required: false
        default: true
        type: boolean
  schedule:
    # Weekly release-readiness verification on default branch (no publish)
    - cron: "17 8 * * 1"

concurrency:
  group: release-${{ github.ref || github.run_id }}
  cancel-in-progress: false

permissions:
  contents: write
  packages: read
  id-token: write # Required for cosign keyless signing via OIDC

env:
  CARGO_TERM_COLOR: always

jobs:
  prepare:
    name: Prepare Release Context
    runs-on: blacksmith-2vcpu-ubuntu-2404
    outputs:
      release_ref: ${{ steps.vars.outputs.release_ref }}
      release_tag: ${{ steps.vars.outputs.release_tag }}
      publish_release: ${{ steps.vars.outputs.publish_release }}
      draft_release: ${{ steps.vars.outputs.draft_release }}
    steps:
      - name: Resolve release inputs
        id: vars
        shell: bash
        run: |
          set -euo pipefail

          event_name="${GITHUB_EVENT_NAME}"
          publish_release="false"
          draft_release="false"
          semver_pattern='^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?$'

          if [[ "$event_name" == "push" ]]; then
            release_ref="${GITHUB_REF_NAME}"
            release_tag="${GITHUB_REF_NAME}"
            publish_release="true"
          elif [[ "$event_name" == "workflow_dispatch" ]]; then
            release_ref="${{ inputs.release_ref }}"
            publish_release="${{ inputs.publish_release }}"
            draft_release="${{ inputs.draft }}"

            if [[ "$publish_release" == "true" ]]; then
              release_tag="${{ inputs.release_tag }}"
              if [[ -z "$release_tag" ]]; then
                echo "::error::release_tag is required when publish_release=true"
                exit 1
              fi
              release_ref="$release_tag"
            else
              release_tag="verify-${GITHUB_SHA::12}"
            fi
          else
            # schedule
            release_ref="main"
            release_tag="verify-${GITHUB_SHA::12}"
          fi

          if [[ "$publish_release" == "true" ]]; then
            if [[ ! "$release_tag" =~ $semver_pattern ]]; then
              echo "::error::release_tag must match semver-like format (vX.Y.Z[-suffix])"
              exit 1
            fi
            if ! git ls-remote --exit-code --tags "https://github.com/${GITHUB_REPOSITORY}.git" "refs/tags/${release_tag}" >/dev/null; then
              echo "::error::Tag ${release_tag} does not exist on origin. Push the tag first, then rerun manual publish."
              exit 1
            fi

            # Guardrail: release tags must resolve to commits already reachable from main.
            tmp_repo="$(mktemp -d)"
            trap 'rm -rf "$tmp_repo"' EXIT
            git -C "$tmp_repo" init -q
            git -C "$tmp_repo" remote add origin "https://github.com/${GITHUB_REPOSITORY}.git"
            git -C "$tmp_repo" fetch --quiet --filter=blob:none origin main "refs/tags/${release_tag}:refs/tags/${release_tag}"
            if ! git -C "$tmp_repo" merge-base --is-ancestor "refs/tags/${release_tag}" "origin/main"; then
              echo "::error::Tag ${release_tag} is not reachable from origin/main. Release tags must be cut from main."
              exit 1
            fi

            # Guardrail: release tag and Cargo package version must stay aligned.
            tag_version="${release_tag#v}"
            cargo_version="$(git -C "$tmp_repo" show "refs/tags/${release_tag}:Cargo.toml" | sed -n 's/^version = "\([^"]*\)"/\1/p' | head -n1)"
            if [[ -z "$cargo_version" ]]; then
              echo "::error::Unable to read Cargo package version from ${release_tag}:Cargo.toml"
              exit 1
            fi
            if [[ "$cargo_version" != "$tag_version" ]]; then
              echo "::error::Tag ${release_tag} does not match Cargo.toml version (${cargo_version})."
              echo "::error::Bump Cargo.toml version first, then create/publish the matching tag."
              exit 1
            fi
          fi

          {
            echo "release_ref=${release_ref}"
            echo "release_tag=${release_tag}"
            echo "publish_release=${publish_release}"
            echo "draft_release=${draft_release}"
          } >> "$GITHUB_OUTPUT"

          {
            echo "### Release Context"
            echo "- event: ${event_name}"
            echo "- release_ref: ${release_ref}"
            echo "- release_tag: ${release_tag}"
            echo "- publish_release: ${publish_release}"
            echo "- draft_release: ${draft_release}"
          } >> "$GITHUB_STEP_SUMMARY"

  build-release:
    name: Build ${{ matrix.target }}
    needs: [prepare]
    runs-on: ${{ matrix.os }}
    timeout-minutes: 40
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-latest
            target: x86_64-unknown-linux-gnu
            artifact: zeroclaw
            archive_ext: tar.gz
            cross_compiler: ""
            linker_env: ""
            linker: ""
          - os: ubuntu-latest
            target: aarch64-unknown-linux-gnu
            artifact: zeroclaw
            archive_ext: tar.gz
            cross_compiler: gcc-aarch64-linux-gnu
            linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
            linker: aarch64-linux-gnu-gcc
          - os: ubuntu-latest
            target: armv7-unknown-linux-gnueabihf
            artifact: zeroclaw
            archive_ext: tar.gz
            cross_compiler: gcc-arm-linux-gnueabihf
            linker_env: CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER
            linker: arm-linux-gnueabihf-gcc
          - os: ubuntu-latest
            target: armv7-linux-androideabi
            artifact: zeroclaw
            archive_ext: tar.gz
            cross_compiler: ""
            linker_env: ""
            linker: ""
            android_ndk: true
            android_api: 21
          - os: ubuntu-latest
            target: aarch64-linux-android
            artifact: zeroclaw
            archive_ext: tar.gz
            cross_compiler: ""
            linker_env: ""
            linker: ""
            android_ndk: true
            android_api: 21
          - os: macos-15-intel
            target: x86_64-apple-darwin
            artifact: zeroclaw
            archive_ext: tar.gz
            cross_compiler: ""
            linker_env: ""
            linker: ""
          - os: macos-14
            target: aarch64-apple-darwin
            artifact: zeroclaw
            archive_ext: tar.gz
            cross_compiler: ""
            linker_env: ""
            linker: ""
          - os: windows-latest
            target: x86_64-pc-windows-msvc
            artifact: zeroclaw.exe
            archive_ext: zip
            cross_compiler: ""
            linker_env: ""
            linker: ""

    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          ref: ${{ needs.prepare.outputs.release_ref }}

      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          targets: ${{ matrix.target }}

      - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
        if: runner.os != 'Windows'

      - name: Install cross-compilation toolchain (Linux)
        if: runner.os == 'Linux' && matrix.cross_compiler != ''
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y ${{ matrix.cross_compiler }}

      - name: Setup Android NDK
        if: matrix.android_ndk
        uses: nttld/setup-ndk@v1
        id: setup-ndk
        with:
          ndk-version: r26d
          add-to-path: true

      - name: Configure Android toolchain
        if: matrix.android_ndk
        run: |
          echo "Setting up Android NDK toolchain for ${{ matrix.target }}"
          NDK_HOME="${{ steps.setup-ndk.outputs.ndk-path }}"
          TOOLCHAIN="$NDK_HOME/toolchains/llvm/prebuilt/linux-x86_64/bin"

          # Add to path for linker resolution
          echo "$TOOLCHAIN" >> $GITHUB_PATH

          # Set linker environment variables
          if [[ "${{ matrix.target }}" == "armv7-linux-androideabi" ]]; then
            echo "CARGO_TARGET_ARMV7_LINUX_ANDROIDEABI_LINKER=${TOOLCHAIN}/armv7a-linux-androideabi${{ matrix.android_api }}-clang" >> $GITHUB_ENV
          elif [[ "${{ matrix.target }}" == "aarch64-linux-android" ]]; then
            echo "CARGO_TARGET_AARCH64_LINUX_ANDROID_LINKER=${TOOLCHAIN}/aarch64-linux-android${{ matrix.android_api }}-clang" >> $GITHUB_ENV
          fi

      - name: Build release
        shell: bash
        env:
          LINKER_ENV: ${{ matrix.linker_env }}
          LINKER: ${{ matrix.linker }}
        run: |
          if [ -n "$LINKER_ENV" ] && [ -n "$LINKER" ]; then
            echo "Using linker override: $LINKER_ENV=$LINKER"
            export "$LINKER_ENV=$LINKER"
          fi
          cargo build --profile release-fast --locked --target ${{ matrix.target }}

      - name: Check binary size (Unix)
        if: runner.os != 'Windows'
        run: bash scripts/ci/check_binary_size.sh "target/${{ matrix.target }}/release-fast/${{ matrix.artifact }}" "${{ matrix.target }}"

      - name: Package (Unix)
        if: runner.os != 'Windows'
        run: |
          cd target/${{ matrix.target }}/release-fast
          tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }} ${{ matrix.artifact }}

      - name: Package (Windows)
        if: runner.os == 'Windows'
        run: |
          cd target/${{ matrix.target }}/release-fast
          7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }} ${{ matrix.artifact }}

      - name: Upload artifact
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: zeroclaw-${{ matrix.target }}
          path: zeroclaw-${{ matrix.target }}.${{ matrix.archive_ext }}
          retention-days: 7

  verify-artifacts:
    name: Verify Artifact Set
    needs: [prepare, build-release]
    runs-on: blacksmith-2vcpu-ubuntu-2404
    steps:
      - name: Download all artifacts
        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
        with:
          path: artifacts

      - name: Validate expected archives
        shell: bash
        run: |
          set -euo pipefail
          expected=(
            "zeroclaw-x86_64-unknown-linux-gnu.tar.gz"
            "zeroclaw-aarch64-unknown-linux-gnu.tar.gz"
            "zeroclaw-armv7-unknown-linux-gnueabihf.tar.gz"
            "zeroclaw-armv7-linux-androideabi.tar.gz"
            "zeroclaw-aarch64-linux-android.tar.gz"
            "zeroclaw-x86_64-apple-darwin.tar.gz"
            "zeroclaw-aarch64-apple-darwin.tar.gz"
            "zeroclaw-x86_64-pc-windows-msvc.zip"
          )

          missing=0
          for file in "${expected[@]}"; do
            if ! find artifacts -type f -name "$file" -print -quit | grep -q .; then
              echo "::error::Missing release archive: $file"
              missing=1
            fi
          done

          if [ "$missing" -ne 0 ]; then
            exit 1
          fi

          echo "All expected release archives are present."

  publish:
    name: Publish Release
    if: needs.prepare.outputs.publish_release == 'true'
    needs: [prepare, verify-artifacts]
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 45
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
        with:
          ref: ${{ needs.prepare.outputs.release_ref }}

      - name: Download all artifacts
        uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
        with:
          path: artifacts

      - name: Install syft
        run: |
          curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

      - name: Generate SBOM (CycloneDX)
        run: |
          syft dir:. --source-name zeroclaw -o cyclonedx-json=artifacts/zeroclaw.cdx.json -o spdx-json=artifacts/zeroclaw.spdx.json
          {
            echo "### SBOM Generated"
            echo "- CycloneDX: zeroclaw.cdx.json"
            echo "- SPDX: zeroclaw.spdx.json"
          } >> "$GITHUB_STEP_SUMMARY"

      - name: Attach license and notice files
        run: |
          cp LICENSE-APACHE artifacts/LICENSE-APACHE
          cp LICENSE-MIT artifacts/LICENSE-MIT
          cp NOTICE artifacts/NOTICE

      - name: Generate SHA256 checksums
        run: |
          cd artifacts
          find . -type f \( -name '*.tar.gz' -o -name '*.zip' -o -name '*.cdx.json' -o -name '*.spdx.json' -o -name 'LICENSE-APACHE' -o -name 'LICENSE-MIT' -o -name 'NOTICE' \) -exec sha256sum {} + | sed 's| \./[^/]*/| |' > SHA256SUMS
          echo "Generated checksums:"
          cat SHA256SUMS

      - name: Install cosign
        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0

      - name: Sign artifacts with cosign (keyless)
        shell: bash
        run: |
          set -euo pipefail
          while IFS= read -r -d '' file; do
            cosign sign-blob --yes \
              --bundle="${file}.sigstore.json" \
              --output-signature="${file}.sig" \
              --output-certificate="${file}.pem" \
              "$file"
          done < <(find artifacts -type f ! -name '*.sig' ! -name '*.pem' ! -name '*.sigstore.json' -print0)

      - name: Verify GHCR release tag availability
        shell: bash
        env:
          RELEASE_TAG: ${{ needs.prepare.outputs.release_tag }}
        run: |
          set -euo pipefail
          repo="${GITHUB_REPOSITORY,,}"
          manifest_url="https://ghcr.io/v2/${repo}/manifests/${RELEASE_TAG}"
          accept_header="application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json"
          max_attempts=75
          sleep_seconds=20

          for attempt in $(seq 1 "$max_attempts"); do
            token_resp="$(curl -sS "https://ghcr.io/token?scope=repository:${repo}:pull" || true)"
            token="$(echo "$token_resp" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')"

            if [ -z "$token" ]; then
              code="000"
            else
              code="$(curl -sS -o /tmp/ghcr-release-manifest.json -w "%{http_code}" \
                -H "Authorization: Bearer ${token}" \
                -H "Accept: ${accept_header}" \
                "${manifest_url}" || true)"
            fi

            if [ "$code" = "200" ]; then
              echo "GHCR release tag is available: ${repo}:${RELEASE_TAG}"
              exit 0
            fi

            if [ "$attempt" -lt "$max_attempts" ]; then
              echo "Waiting for GHCR tag ${repo}:${RELEASE_TAG} (attempt ${attempt}/${max_attempts}, HTTP ${code})..."
              sleep "$sleep_seconds"
            fi
          done

          echo "::error::GHCR tag ${repo}:${RELEASE_TAG} was not available before release publish timeout."
|
||||
cat /tmp/ghcr-release-manifest.json || true
|
||||
exit 1
|
||||
|
||||
- name: Create GitHub Release
|
||||
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2
|
||||
with:
|
||||
tag_name: ${{ needs.prepare.outputs.release_tag }}
|
||||
draft: ${{ needs.prepare.outputs.draft_release == 'true' }}
|
||||
generate_release_notes: true
|
||||
files: |
|
||||
artifacts/**/*
|
||||
env:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
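The `sed 's| \./[^/]*/| |'` in the checksum step is easy to misread: it rewrites each `sha256sum` line so the leading `./<artifact-dir>/` component is dropped and SHA256SUMS lists bare asset names that match the uploaded release files. A minimal sketch of that rewrite (the digest and paths below are made-up illustrations, not real output):

```javascript
// Mirror of: sed 's| \./[^/]*/| |' applied to `sha256sum` output lines.
// Removes the first "./<dir>/" path component after the digest, keeping
// sha256sum's two-space separator intact; lines without such a prefix
// (e.g. files at the artifacts root) pass through unchanged.
function stripArtifactDir(line) {
  return line.replace(/ \.\/[^/]*\//, " ");
}

// Hypothetical sample line for illustration only.
const sample =
  "deadbeef  ./zeroclaw-x86_64-unknown-linux-gnu/zeroclaw-x86_64-unknown-linux-gnu.tar.gz";
console.log(stripArtifactDir(sample));
```

Without this rewrite, `sha256sum -c` against the flat release assets would fail, because the recorded paths would still contain the per-artifact download directories.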
@@ -0,0 +1,189 @@
name: Release Beta

on:
  push:
    branches: [master]

concurrency:
  group: release
  cancel-in-progress: false

permissions:
  contents: write
  packages: write

env:
  CARGO_TERM_COLOR: always
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  version:
    name: Resolve Version
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.ver.outputs.version }}
      tag: ${{ steps.ver.outputs.tag }}
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - name: Compute beta version
        id: ver
        shell: bash
        run: |
          set -euo pipefail
          base_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)
          beta_tag="v${base_version}-beta.${GITHUB_RUN_NUMBER}"
          echo "version=${base_version}" >> "$GITHUB_OUTPUT"
          echo "tag=${beta_tag}" >> "$GITHUB_OUTPUT"
          echo "Beta release: ${beta_tag}"

  web:
    name: Build Web Dashboard
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
          cache-dependency-path: web/package-lock.json
      - name: Build web dashboard
        run: cd web && npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: web-dist
          path: web/dist/
          retention-days: 1

  build:
    name: Build ${{ matrix.target }}
    needs: [version, web]
    runs-on: ${{ matrix.os }}
    timeout-minutes: 40
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-22.04
            target: x86_64-unknown-linux-gnu
            artifact: zeroclaw
            ext: tar.gz
          - os: ubuntu-22.04
            target: aarch64-unknown-linux-gnu
            artifact: zeroclaw
            ext: tar.gz
            cross_compiler: gcc-aarch64-linux-gnu
            linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
            linker: aarch64-linux-gnu-gcc
          - os: macos-14
            target: aarch64-apple-darwin
            artifact: zeroclaw
            ext: tar.gz
          - os: windows-latest
            target: x86_64-pc-windows-msvc
            artifact: zeroclaw.exe
            ext: zip
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          targets: ${{ matrix.target }}
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
        if: runner.os != 'Windows'

      - uses: actions/download-artifact@v4
        with:
          name: web-dist
          path: web/dist/

      - name: Install cross compiler
        if: matrix.cross_compiler
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y ${{ matrix.cross_compiler }}

      - name: Build release
        shell: bash
        run: |
          if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
            export "${{ matrix.linker_env }}=${{ matrix.linker }}"
          fi
          cargo build --release --locked --target ${{ matrix.target }}

      - name: Package (Unix)
        if: runner.os != 'Windows'
        run: |
          cd target/${{ matrix.target }}/release
          tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}

      - name: Package (Windows)
        if: runner.os == 'Windows'
        run: |
          cd target/${{ matrix.target }}/release
          7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}

      - uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
        with:
          name: zeroclaw-${{ matrix.target }}
          path: zeroclaw-${{ matrix.target }}.${{ matrix.ext }}
          retention-days: 7

  publish:
    name: Publish Beta Release
    needs: [version, build]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
        with:
          pattern: zeroclaw-*
          path: artifacts

      - name: Generate checksums
        run: |
          cd artifacts
          find . -type f \( -name '*.tar.gz' -o -name '*.zip' \) -exec sha256sum {} + | sed 's| \./[^/]*/| |' > SHA256SUMS
          cat SHA256SUMS

      - name: Create GitHub Release
        uses: softprops/action-gh-release@5be0e66d93ac7ed76da52eca8bb058f665c3a5fe # v2.4.2
        with:
          tag_name: ${{ needs.version.outputs.tag }}
          name: ${{ needs.version.outputs.tag }}
          prerelease: true
          generate_release_notes: true
          files: |
            artifacts/**/*
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  docker:
    name: Push Docker Image
    needs: [version, build]
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3

      - uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
        with:
          context: .
          push: true
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta
          platforms: linux/amd64,linux/arm64
          cache-from: type=gha
          cache-to: type=gha,mode=max
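The Compute beta version step above derives the tag purely from the first `version = "..."` line in Cargo.toml plus the workflow run number. The same composition as a sketch (the TOML sample and run number are illustrative; the real step does this with `sed` and `$GITHUB_RUN_NUMBER`):

```javascript
// Mirrors: beta_tag="v${base_version}-beta.${GITHUB_RUN_NUMBER}"
// Like `sed ... | head -1`, only the first version line counts, so a
// `version =` key under [dependencies] later in the file is ignored.
function betaTag(cargoToml, runNumber) {
  const match = cargoToml.match(/^version = "([^"]*)"/m);
  if (!match) throw new Error("no version line found in Cargo.toml");
  return `v${match[1]}-beta.${runNumber}`;
}

// Hypothetical manifest fragment for illustration.
const toml = '[package]\nname = "zeroclaw"\nversion = "0.1.0"\n';
console.log(betaTag(toml, 42)); // v0.1.0-beta.42
```

Because `GITHUB_RUN_NUMBER` increases monotonically per workflow, each push to master yields a unique, sortable pre-release tag without touching Cargo.toml.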
@@ -0,0 +1,207 @@
name: Release Stable

on:
  workflow_dispatch:
    inputs:
      version:
        description: "Stable version to release (e.g. 0.2.0)"
        required: true
        type: string

concurrency:
  group: promote-release
  cancel-in-progress: false

permissions:
  contents: write
  packages: write

env:
  CARGO_TERM_COLOR: always
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  validate:
    name: Validate Version
    runs-on: ubuntu-latest
    outputs:
      tag: ${{ steps.check.outputs.tag }}
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - name: Validate semver and Cargo.toml match
        id: check
        shell: bash
        run: |
          set -euo pipefail
          input_version="${{ inputs.version }}"
          cargo_version=$(sed -n 's/^version = "\([^"]*\)"/\1/p' Cargo.toml | head -1)

          if [[ ! "$input_version" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
            echo "::error::Version must be semver (X.Y.Z). Got: ${input_version}"
            exit 1
          fi

          if [[ "$cargo_version" != "$input_version" ]]; then
            echo "::error::Cargo.toml version (${cargo_version}) does not match input (${input_version}). Bump Cargo.toml first."
            exit 1
          fi

          tag="v${input_version}"
          if git ls-remote --exit-code --tags origin "refs/tags/${tag}" >/dev/null 2>&1; then
            echo "::error::Tag ${tag} already exists."
            exit 1
          fi

          echo "tag=${tag}" >> "$GITHUB_OUTPUT"

  web:
    name: Build Web Dashboard
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
          cache-dependency-path: web/package-lock.json
      - name: Build web dashboard
        run: cd web && npm ci && npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: web-dist
          path: web/dist/
          retention-days: 1

  build:
    name: Build ${{ matrix.target }}
    needs: [validate, web]
    runs-on: ${{ matrix.os }}
    timeout-minutes: 40
    strategy:
      fail-fast: false
      matrix:
        include:
          - os: ubuntu-22.04
            target: x86_64-unknown-linux-gnu
            artifact: zeroclaw
            ext: tar.gz
          - os: ubuntu-22.04
            target: aarch64-unknown-linux-gnu
            artifact: zeroclaw
            ext: tar.gz
            cross_compiler: gcc-aarch64-linux-gnu
            linker_env: CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER
            linker: aarch64-linux-gnu-gcc
          - os: macos-14
            target: aarch64-apple-darwin
            artifact: zeroclaw
            ext: tar.gz
          - os: windows-latest
            target: x86_64-pc-windows-msvc
            artifact: zeroclaw.exe
            ext: zip
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
          targets: ${{ matrix.target }}
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
        if: runner.os != 'Windows'

      - uses: actions/download-artifact@v4
        with:
          name: web-dist
          path: web/dist/

      - name: Install cross compiler
        if: matrix.cross_compiler
        run: |
          sudo apt-get update -qq
          sudo apt-get install -y ${{ matrix.cross_compiler }}

      - name: Build release
        shell: bash
        run: |
          if [ -n "${{ matrix.linker_env || '' }}" ] && [ -n "${{ matrix.linker || '' }}" ]; then
            export "${{ matrix.linker_env }}=${{ matrix.linker }}"
          fi
          cargo build --release --locked --target ${{ matrix.target }}

      - name: Package (Unix)
        if: runner.os != 'Windows'
        run: |
          cd target/${{ matrix.target }}/release
          tar czf ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}

      - name: Package (Windows)
        if: runner.os == 'Windows'
        run: |
          cd target/${{ matrix.target }}/release
          7z a ../../../zeroclaw-${{ matrix.target }}.${{ matrix.ext }} ${{ matrix.artifact }}

      - uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
        with:
          name: zeroclaw-${{ matrix.target }}
          path: zeroclaw-${{ matrix.target }}.${{ matrix.ext }}
          retention-days: 14

  publish:
    name: Publish Stable Release
    needs: [validate, build]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
        with:
          pattern: zeroclaw-*
          path: artifacts

      - name: Generate checksums
        run: |
          cd artifacts
          find . -type f \( -name '*.tar.gz' -o -name '*.zip' \) -exec sha256sum {} + | sed 's| \./[^/]*/| |' > SHA256SUMS
          cat SHA256SUMS

      - name: Create GitHub Release
        uses: softprops/action-gh-release@5be0e66d93ac7ed76da52eca8bb058f665c3a5fe # v2.4.2
        with:
          tag_name: ${{ needs.validate.outputs.tag }}
          name: ${{ needs.validate.outputs.tag }}
          prerelease: false
          generate_release_notes: true
          files: |
            artifacts/**/*
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  docker:
    name: Push Docker Image
    needs: [validate, build]
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3

      - uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
        with:
          context: .
          push: true
          tags: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
          platforms: linux/amd64,linux/arm64
          cache-from: type=gha
          cache-to: type=gha,mode=max
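The Validate Version step gates a stable release on two checks: the dispatched input must be plain `X.Y.Z` semver, and it must exactly match the version already committed in Cargo.toml. The same gate as a sketch (regex copied from the bash step; function and field names are illustrative):

```javascript
// Mirrors the bash validation: strict X.Y.Z shape first, then an exact
// match against Cargo.toml. Pre-release suffixes are rejected by design,
// since those go through the beta workflow instead.
function validateStableVersion(inputVersion, cargoVersion) {
  if (!/^[0-9]+\.[0-9]+\.[0-9]+$/.test(inputVersion)) {
    return { ok: false, reason: `Version must be semver (X.Y.Z). Got: ${inputVersion}` };
  }
  if (cargoVersion !== inputVersion) {
    return {
      ok: false,
      reason: `Cargo.toml version (${cargoVersion}) does not match input (${inputVersion}).`,
    };
  }
  return { ok: true, tag: `v${inputVersion}` };
}
```

Note the real step adds a third check this sketch omits: `git ls-remote --exit-code --tags` to refuse re-tagging an existing release.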
@@ -1,54 +0,0 @@
// Enforce ownership rules for root license files in PRs.

module.exports = async ({ github, context, core }) => {
  const owner = context.repo.owner;
  const repo = context.repo.repo;
  const prNumber = context.payload.pull_request?.number;
  const prAuthor = context.payload.pull_request?.user?.login?.toLowerCase() || "";

  if (!prNumber) {
    core.setFailed("Missing pull_request context.");
    return;
  }

  const ownerAllowlist = ["willsarg"];

  if (ownerAllowlist.length === 0) {
    core.setFailed("License owner allowlist is empty.");
    return;
  }

  const protectedFiles = new Set(["LICENSE-APACHE", "LICENSE-MIT"]);
  const files = await github.paginate(github.rest.pulls.listFiles, {
    owner,
    repo,
    pull_number: prNumber,
    per_page: 100,
  });

  const changedProtectedFiles = files
    .map((file) => file.filename)
    .filter((name) => protectedFiles.has(name));

  if (changedProtectedFiles.length === 0) {
    core.info("No protected root license files changed in this PR.");
    return;
  }

  core.info(`Protected license files changed:\n- ${changedProtectedFiles.join("\n- ")}`);
  core.info(`Allowed license file editors: ${ownerAllowlist.join(", ")}`);

  if (!prAuthor) {
    core.setFailed("Unable to resolve PR author login.");
    return;
  }

  if (!ownerAllowlist.includes(prAuthor)) {
    core.setFailed(
      `Root license files (${changedProtectedFiles.join(", ")}) can only be changed by ${ownerAllowlist.join(", ")}. PR author is @${prAuthor}.`,
    );
    return;
  }

  core.info(`License file edit authorized for PR author: @${prAuthor}`);
};
@@ -1,83 +0,0 @@
// Extracted from ci-run.yml step: Require owner approval for workflow file changes

module.exports = async ({ github, context, core }) => {
  const owner = context.repo.owner;
  const repo = context.repo.repo;
  const prNumber = context.payload.pull_request?.number;
  const prAuthor = context.payload.pull_request?.user?.login?.toLowerCase() || "";
  if (!prNumber) {
    core.setFailed("Missing pull_request context.");
    return;
  }

  const baseOwners = ["theonlyhennygod", "willsarg"];
  const configuredOwners = (process.env.WORKFLOW_OWNER_LOGINS || "")
    .split(",")
    .map((login) => login.trim().toLowerCase())
    .filter(Boolean);
  const ownerAllowlist = [...new Set([...baseOwners, ...configuredOwners])];

  if (ownerAllowlist.length === 0) {
    core.setFailed("Workflow owner allowlist is empty.");
    return;
  }

  core.info(`Workflow owner allowlist: ${ownerAllowlist.join(", ")}`);

  const files = await github.paginate(github.rest.pulls.listFiles, {
    owner,
    repo,
    pull_number: prNumber,
    per_page: 100,
  });

  const workflowFiles = files
    .map((file) => file.filename)
    .filter((name) => name.startsWith(".github/workflows/"));

  if (workflowFiles.length === 0) {
    core.info("No workflow files changed in this PR.");
    return;
  }

  core.info(`Workflow files changed:\n- ${workflowFiles.join("\n- ")}`);

  if (prAuthor && ownerAllowlist.includes(prAuthor)) {
    core.info(`Workflow PR authored by allowlisted owner: @${prAuthor}`);
    return;
  }

  const reviews = await github.paginate(github.rest.pulls.listReviews, {
    owner,
    repo,
    pull_number: prNumber,
    per_page: 100,
  });

  const latestReviewByUser = new Map();
  for (const review of reviews) {
    const login = review.user?.login;
    if (!login) continue;
    latestReviewByUser.set(login.toLowerCase(), review.state);
  }

  const approvedUsers = [...latestReviewByUser.entries()]
    .filter(([, state]) => state === "APPROVED")
    .map(([login]) => login);

  if (approvedUsers.length === 0) {
    core.setFailed("Workflow files changed but no approving review is present.");
    return;
  }

  const ownerApprover = approvedUsers.find((login) => ownerAllowlist.includes(login));
  if (!ownerApprover) {
    core.setFailed(
      `Workflow files changed. Approvals found (${approvedUsers.join(", ")}), but none match workflow owner allowlist.`,
    );
    return;
  }

  core.info(`Workflow owner approval present: @${ownerApprover}`);
};
@@ -1,90 +0,0 @@
// Post actionable lint failure summary as a PR comment.
// Used by the lint-feedback CI job via actions/github-script.
//
// Required environment variables:
//   RUST_CHANGED — "true" if Rust files changed
//   DOCS_CHANGED — "true" if docs files changed
//   LINT_RESULT — result of the lint job
//   LINT_DELTA_RESULT — result of the strict delta lint job
//   DOCS_RESULT — result of the docs-quality job

module.exports = async ({ github, context, core }) => {
  const owner = context.repo.owner;
  const repo = context.repo.repo;
  const issueNumber = context.payload.pull_request?.number;
  if (!issueNumber) return;

  const marker = "<!-- ci-lint-feedback -->";
  const rustChanged = process.env.RUST_CHANGED === "true";
  const docsChanged = process.env.DOCS_CHANGED === "true";
  const lintResult = process.env.LINT_RESULT || "skipped";
  const lintDeltaResult = process.env.LINT_DELTA_RESULT || "skipped";
  const docsResult = process.env.DOCS_RESULT || "skipped";

  const failures = [];
  if (rustChanged && !["success", "skipped"].includes(lintResult)) {
    failures.push("`Lint Gate (Format + Clippy)` failed.");
  }
  if (rustChanged && !["success", "skipped"].includes(lintDeltaResult)) {
    failures.push("`Lint Gate (Strict Delta)` failed.");
  }
  if (docsChanged && !["success", "skipped"].includes(docsResult)) {
    failures.push("`Docs Quality` failed.");
  }

  const comments = await github.paginate(github.rest.issues.listComments, {
    owner,
    repo,
    issue_number: issueNumber,
    per_page: 100,
  });
  const existing = comments.find((comment) => (comment.body || "").includes(marker));

  if (failures.length === 0) {
    if (existing) {
      await github.rest.issues.deleteComment({
        owner,
        repo,
        comment_id: existing.id,
      });
    }
    core.info("No lint/docs gate failures. No feedback comment required.");
    return;
  }

  const runUrl = `${context.serverUrl}/${owner}/${repo}/actions/runs/${context.runId}`;
  const body = [
    marker,
    "### CI lint feedback",
    "",
    "This PR failed one or more fast lint/documentation gates:",
    "",
    ...failures.map((item) => `- ${item}`),
    "",
    "Open the failing logs in this run:",
    `- ${runUrl}`,
    "",
    "Local fix commands:",
    "- `./scripts/ci/rust_quality_gate.sh`",
    "- `./scripts/ci/rust_strict_delta_gate.sh`",
    "- `./scripts/ci/docs_quality_gate.sh`",
    "",
    "After fixes, push a new commit and CI will re-run automatically.",
  ].join("\n");

  if (existing) {
    await github.rest.issues.updateComment({
      owner,
      repo,
      comment_id: existing.id,
      body,
    });
  } else {
    await github.rest.issues.createComment({
      owner,
      repo,
      issue_number: issueNumber,
      body,
    });
  }
};
@@ -1,132 +0,0 @@
// Extracted from pr-auto-response.yml step: Apply contributor tier label for issue author

module.exports = async ({ github, context, core }) => {
  const owner = context.repo.owner;
  const repo = context.repo.repo;
  const issue = context.payload.issue;
  const pullRequest = context.payload.pull_request;
  const target = issue ?? pullRequest;
  async function loadContributorTierPolicy() {
    const policyPath = process.env.LABEL_POLICY_PATH || ".github/label-policy.json";
    const fallback = {
      contributorTierColor: "2ED9FF",
      contributorTierRules: [
        { label: "distinguished contributor", minMergedPRs: 50 },
        { label: "principal contributor", minMergedPRs: 20 },
        { label: "experienced contributor", minMergedPRs: 10 },
        { label: "trusted contributor", minMergedPRs: 5 },
      ],
    };
    try {
      const { data } = await github.rest.repos.getContent({
        owner,
        repo,
        path: policyPath,
        ref: context.payload.repository?.default_branch || "main",
      });
      const json = JSON.parse(Buffer.from(data.content, "base64").toString("utf8"));
      const contributorTierRules = (json.contributor_tiers || []).map((entry) => ({
        label: String(entry.label || "").trim(),
        minMergedPRs: Number(entry.min_merged_prs || 0),
      }));
      const contributorTierColor = String(json.contributor_tier_color || "").toUpperCase();
      if (!contributorTierColor || contributorTierRules.length === 0) {
        return fallback;
      }
      return { contributorTierColor, contributorTierRules };
    } catch (error) {
      core.warning(`failed to load ${policyPath}, using fallback policy: ${error.message}`);
      return fallback;
    }
  }

  const { contributorTierColor, contributorTierRules } = await loadContributorTierPolicy();
  const contributorTierLabels = contributorTierRules.map((rule) => rule.label);
  const managedContributorLabels = new Set(contributorTierLabels);
  const action = context.payload.action;
  const changedLabel = context.payload.label?.name;

  if (!target) return;
  if ((action === "labeled" || action === "unlabeled") && !managedContributorLabels.has(changedLabel)) {
    return;
  }

  const author = target.user;
  if (!author || author.type === "Bot") return;

  function contributorTierDescription(rule) {
    return `Contributor with ${rule.minMergedPRs}+ merged PRs.`;
  }

  async function ensureContributorTierLabels() {
    for (const rule of contributorTierRules) {
      const label = rule.label;
      const expectedDescription = contributorTierDescription(rule);
      try {
        const { data: existing } = await github.rest.issues.getLabel({ owner, repo, name: label });
        const currentColor = (existing.color || "").toUpperCase();
        const currentDescription = (existing.description || "").trim();
        if (currentColor !== contributorTierColor || currentDescription !== expectedDescription) {
          await github.rest.issues.updateLabel({
            owner,
            repo,
            name: label,
            new_name: label,
            color: contributorTierColor,
            description: expectedDescription,
          });
        }
      } catch (error) {
        if (error.status !== 404) throw error;
        await github.rest.issues.createLabel({
          owner,
          repo,
          name: label,
          color: contributorTierColor,
          description: expectedDescription,
        });
      }
    }
  }

  function selectContributorTier(mergedCount) {
    const matchedTier = contributorTierRules.find((rule) => mergedCount >= rule.minMergedPRs);
    return matchedTier ? matchedTier.label : null;
  }

  let contributorTierLabel = null;
  try {
    const { data: mergedSearch } = await github.rest.search.issuesAndPullRequests({
      q: `repo:${owner}/${repo} is:pr is:merged author:${author.login}`,
      per_page: 1,
    });
    const mergedCount = mergedSearch.total_count || 0;
    contributorTierLabel = selectContributorTier(mergedCount);
  } catch (error) {
    core.warning(`failed to evaluate contributor tier status: ${error.message}`);
    return;
  }

  await ensureContributorTierLabels();

  const { data: currentLabels } = await github.rest.issues.listLabelsOnIssue({
    owner,
    repo,
    issue_number: target.number,
  });
  const keepLabels = currentLabels
    .map((label) => label.name)
    .filter((label) => !contributorTierLabels.includes(label));

  if (contributorTierLabel) {
    keepLabels.push(contributorTierLabel);
  }

  await github.rest.issues.setLabels({
    owner,
    repo,
    issue_number: target.number,
    labels: [...new Set(keepLabels)],
  });
};
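One subtlety in the contributor-tier script above: `selectContributorTier` uses `Array.prototype.find`, which returns the first rule whose threshold is met. That only awards the highest earned tier because the rules are listed from highest `minMergedPRs` to lowest; an ascending list would always match the lowest tier first. A quick demonstration (tier names and thresholds copied from the script's fallback policy):

```javascript
// Descending order is load-bearing: `find` stops at the first match.
const rules = [
  { label: "distinguished contributor", minMergedPRs: 50 },
  { label: "principal contributor", minMergedPRs: 20 },
  { label: "experienced contributor", minMergedPRs: 10 },
  { label: "trusted contributor", minMergedPRs: 5 },
];

// Same logic as selectContributorTier above.
function selectTier(mergedCount) {
  const matched = rules.find((rule) => mergedCount >= rule.minMergedPRs);
  return matched ? matched.label : null;
}
```

If tiers are ever supplied via `.github/label-policy.json`, the file must preserve this descending order, since the loader maps entries without re-sorting them.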
@@ -1,94 +0,0 @@
// Extracted from pr-auto-response.yml step: Handle label-driven responses

module.exports = async ({ github, context, core }) => {
  const label = context.payload.label?.name;
  if (!label) return;

  const issue = context.payload.issue;
  const pullRequest = context.payload.pull_request;
  const target = issue ?? pullRequest;
  if (!target) return;

  const isIssue = Boolean(issue);
  const issueNumber = target.number;
  const owner = context.repo.owner;
  const repo = context.repo.repo;

  const rules = [
    {
      label: "r:support",
      close: true,
      closeIssuesOnly: true,
      closeReason: "not_planned",
      message:
        "This looks like a usage/support request. Please use README + docs first, then open a focused bug with repro details if behavior is incorrect.",
    },
    {
      label: "r:needs-repro",
      close: false,
      message:
        "Thanks for the report. Please add deterministic repro steps, exact environment, and redacted logs so maintainers can triage quickly.",
    },
    {
      label: "invalid",
      close: true,
      closeIssuesOnly: true,
      closeReason: "not_planned",
      message:
        "Closing as invalid based on current information. If this is still relevant, open a new issue with updated evidence and reproducible steps.",
    },
    {
      label: "duplicate",
      close: true,
      closeIssuesOnly: true,
      closeReason: "not_planned",
      message:
        "Closing as duplicate. Please continue discussion in the canonical linked issue/PR.",
    },
  ];

  const rule = rules.find((entry) => entry.label === label);
  if (!rule) return;

  const marker = `<!-- auto-response:${rule.label} -->`;
  const comments = await github.paginate(github.rest.issues.listComments, {
    owner,
    repo,
    issue_number: issueNumber,
    per_page: 100,
  });

  const alreadyCommented = comments.some((comment) =>
    (comment.body || "").includes(marker)
  );

  if (!alreadyCommented) {
    await github.rest.issues.createComment({
      owner,
      repo,
      issue_number: issueNumber,
      body: `${rule.message}\n\n${marker}`,
    });
  }

  if (!rule.close) return;
  if (rule.closeIssuesOnly && !isIssue) return;
  if (target.state === "closed") return;

  if (isIssue) {
    await github.rest.issues.update({
      owner,
      repo,
      issue_number: issueNumber,
      state: "closed",
      state_reason: rule.closeReason || "not_planned",
    });
  } else {
    await github.rest.issues.update({
      owner,
      repo,
      issue_number: issueNumber,
      state: "closed",
    });
  }
};
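The auto-response script above stays idempotent through an invisible HTML-comment marker: each rule's reply embeds `<!-- auto-response:<label> -->`, and re-delivered label events skip posting if any existing comment already contains it. The dedupe check reduces to a pure function (a sketch; `shouldComment` is a name invented here, not part of the script):

```javascript
// Same dedupe idea as the script: the marker makes the bot's reply
// machine-recognizable, so repeated "labeled" events never double-post.
// Null/undefined comment bodies are treated as empty, as in the original.
function shouldComment(existingBodies, label) {
  const marker = `<!-- auto-response:${label} -->`;
  return !existingBodies.some((body) => (body || "").includes(marker));
}
```

The marker is per-rule, so applying `invalid` after `duplicate` still posts the `invalid` message; only exact repeats of the same rule are suppressed.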
@@ -1,161 +0,0 @@
// Extracted from pr-check-status.yml step: Nudge PRs that need rebase or CI refresh

module.exports = async ({ github, context, core }) => {
  const staleHours = Number(process.env.STALE_HOURS || "48");
  const ignoreLabels = new Set(["no-stale", "stale", "maintainer", "no-pr-hygiene"]);
  const marker = "<!-- pr-hygiene-nudge -->";
  const owner = context.repo.owner;
  const repo = context.repo.repo;

  const openPrs = await github.paginate(github.rest.pulls.list, {
    owner,
    repo,
    state: "open",
    per_page: 100,
  });

  const activePrs = openPrs.filter((pr) => {
    if (pr.draft) {
      return false;
    }

    const labels = new Set((pr.labels || []).map((label) => label.name));
    return ![...ignoreLabels].some((label) => labels.has(label));
  });

  core.info(`Scanning ${activePrs.length} open PR(s) for hygiene nudges.`);

  let nudged = 0;
  let skipped = 0;

  for (const pr of activePrs) {
    const { data: headCommit } = await github.rest.repos.getCommit({
      owner,
      repo,
      ref: pr.head.sha,
    });

    const headCommitAt =
      headCommit.commit?.committer?.date || headCommit.commit?.author?.date;
    if (!headCommitAt) {
      skipped += 1;
      core.info(`#${pr.number}: missing head commit timestamp, skipping.`);
      continue;
    }

    const ageHours = (Date.now() - new Date(headCommitAt).getTime()) / 3600000;
    if (ageHours < staleHours) {
      skipped += 1;
      continue;
    }

    const { data: prDetail } = await github.rest.pulls.get({
      owner,
      repo,
      pull_number: pr.number,
    });

    const isBehindBase = prDetail.mergeable_state === "behind";

    const { data: checkRunsData } = await github.rest.checks.listForRef({
      owner,
      repo,
      ref: pr.head.sha,
      per_page: 100,
    });

    const ciGateRuns = (checkRunsData.check_runs || [])
      .filter((run) => run.name === "CI Required Gate")
      .sort((a, b) => {
        const aTime = new Date(a.started_at || a.completed_at || a.created_at).getTime();
        const bTime = new Date(b.started_at || b.completed_at || b.created_at).getTime();
        return bTime - aTime;
      });

    let ciState = "missing";
    if (ciGateRuns.length > 0) {
      const latest = ciGateRuns[0];
      if (latest.status !== "completed") {
        ciState = "in_progress";
      } else if (["success", "neutral", "skipped"].includes(latest.conclusion || "")) {
        ciState = "success";
      } else {
        ciState = String(latest.conclusion || "failure");
      }
    }

    const ciMissing = ciState === "missing";
    const ciFailing = !["success", "in_progress", "missing"].includes(ciState);

    if (!isBehindBase && !ciMissing && !ciFailing) {
      skipped += 1;
      continue;
    }

    const reasons = [];
    if (isBehindBase) {
      reasons.push("- Branch is behind `main` (please rebase or merge the latest base branch).");
    }
    if (ciMissing) {
      reasons.push("- No `CI Required Gate` run was found for the current head commit.");
    }
    if (ciFailing) {
      reasons.push(`- Latest \`CI Required Gate\` result is \`${ciState}\`.`);
    }

    const shortSha = pr.head.sha.slice(0, 12);
    const body = [
      marker,
      `Hi @${pr.user.login}, friendly automation nudge from PR hygiene.`,
      "",
      `This PR has had no new commits for **${Math.floor(ageHours)}h** and still needs an update before merge:`,
      "",
      ...reasons,
      "",
      "### Recommended next steps",
      "1. Rebase your branch on `main`.",
      "2. Push the updated branch and re-run checks (or use **Re-run failed jobs**).",
      "3. Post fresh validation output in this PR thread.",
      "",
      "Maintainers: apply `no-stale` to opt out for accepted-but-blocked work.",
      `Head SHA: \`${shortSha}\``,
    ].join("\n");

    const { data: comments } = await github.rest.issues.listComments({
      owner,
      repo,
      issue_number: pr.number,
      per_page: 100,
    });

    const existing = comments.find(
      (comment) => comment.user?.type === "Bot" && comment.body?.includes(marker),
    );

    if (existing) {
      if (existing.body === body) {
        skipped += 1;
        continue;
      }

      await github.rest.issues.updateComment({
        owner,
        repo,
        comment_id: existing.id,
        body,
      });
    } else {
      await github.rest.issues.createComment({
        owner,
        repo,
        issue_number: pr.number,
        body,
      });
    }

    nudged += 1;
    core.info(`#${pr.number}: hygiene nudge posted/updated.`);
  }

  core.info(`Done. Nudged=${nudged}, skipped=${skipped}`);
};
@@ -1,204 +0,0 @@
// Run safe intake checks for PR events and maintain a single sticky comment.
// Used by .github/workflows/pr-intake-checks.yml via actions/github-script.

module.exports = async ({ github, context, core }) => {
  const owner = context.repo.owner;
  const repo = context.repo.repo;
  const pr = context.payload.pull_request;
  if (!pr) return;
  const prAuthor = (pr.user?.login || "").toLowerCase();
  const prBaseRef = pr.base?.ref || "";

  const marker = "<!-- pr-intake-checks -->";
  const legacyMarker = "<!-- pr-intake-sanity -->";
  const requiredSections = [
    "## Summary",
    "## Validation Evidence (required)",
    "## Security Impact (required)",
    "## Privacy and Data Hygiene (required)",
    "## Rollback Plan (required)",
  ];
  const body = pr.body || "";

  const missingSections = requiredSections.filter((section) => !body.includes(section));
  const missingFields = [];
  const requiredFieldChecks = [
    ["summary problem", /- Problem:\s*\S+/m],
    ["summary why it matters", /- Why it matters:\s*\S+/m],
    ["summary what changed", /- What changed:\s*\S+/m],
    ["validation commands", /Commands and result summary:\s*[\s\S]*```/m],
    ["security risk/mitigation", /- New permissions\/capabilities\?\s*\(`Yes\/No`\):\s*\S+/m],
    ["privacy status", /- Data-hygiene status\s*\(`pass\|needs-follow-up`\):\s*\S+/m],
    ["rollback plan", /- Fast rollback command\/path:\s*\S+/m],
  ];
  for (const [name, pattern] of requiredFieldChecks) {
    if (!pattern.test(body)) {
      missingFields.push(name);
    }
  }

  const files = await github.paginate(github.rest.pulls.listFiles, {
    owner,
    repo,
    pull_number: pr.number,
    per_page: 100,
  });

  const formatWarnings = [];
  const dangerousProblems = [];
  for (const file of files) {
    const patch = file.patch || "";
    if (!patch) continue;
    const lines = patch.split("\n");
    for (let idx = 0; idx < lines.length; idx += 1) {
      const line = lines[idx];
      if (!line.startsWith("+") || line.startsWith("+++")) continue;
      const added = line.slice(1);
      const lineNo = idx + 1;
      if (/\t/.test(added)) {
        formatWarnings.push(`${file.filename}:patch#${lineNo} contains tab characters`);
      }
      if (/[ \t]+$/.test(added)) {
        formatWarnings.push(`${file.filename}:patch#${lineNo} contains trailing whitespace`);
      }
      if (/^(<<<<<<<|=======|>>>>>>>)/.test(added)) {
        dangerousProblems.push(`${file.filename}:patch#${lineNo} contains merge conflict markers`);
      }
    }
  }

  const workflowFilesChanged = files
    .map((file) => file.filename)
    .filter((name) => name.startsWith(".github/workflows/"));

  const advisoryFindings = [];
  const blockingFindings = [];
  if (missingSections.length > 0) {
    advisoryFindings.push(`Missing required PR template sections: ${missingSections.join(", ")}`);
  }
  if (missingFields.length > 0) {
    advisoryFindings.push(`Incomplete required PR template fields: ${missingFields.join(", ")}`);
  }
  if (formatWarnings.length > 0) {
    advisoryFindings.push(`Formatting issues in added lines (${formatWarnings.length})`);
  }
  if (dangerousProblems.length > 0) {
    blockingFindings.push(`Dangerous patch markers found (${dangerousProblems.length})`);
  }
  const promotionAuthorAllowlist = new Set(["willsarg", "theonlyhennygod"]);
  const shouldRetargetToDev =
    prBaseRef === "main" && !promotionAuthorAllowlist.has(prAuthor);

  if (shouldRetargetToDev) {
    advisoryFindings.push(
      "This PR targets `main`, but normal contributions must target `dev`. Retarget this PR to `dev` unless this is an authorized promotion PR.",
    );
  }

  const comments = await github.paginate(github.rest.issues.listComments, {
    owner,
    repo,
    issue_number: pr.number,
    per_page: 100,
  });
  const existing = comments.find((comment) => {
    const body = comment.body || "";
    return body.includes(marker) || body.includes(legacyMarker);
  });

  if (advisoryFindings.length === 0 && blockingFindings.length === 0) {
    if (existing) {
      await github.rest.issues.deleteComment({
        owner,
        repo,
        comment_id: existing.id,
      });
    }
    core.info("PR intake sanity checks passed.");
    return;
  }

  const runUrl = `${context.serverUrl}/${owner}/${repo}/actions/runs/${context.runId}`;
  const advisoryDetails = [];
  if (formatWarnings.length > 0) {
    advisoryDetails.push(...formatWarnings.slice(0, 20).map((entry) => `- ${entry}`));
    if (formatWarnings.length > 20) {
      advisoryDetails.push(`- ...and ${formatWarnings.length - 20} more issue(s)`);
    }
  }
  const blockingDetails = [];
  if (dangerousProblems.length > 0) {
    blockingDetails.push(...dangerousProblems.slice(0, 20).map((entry) => `- ${entry}`));
    if (dangerousProblems.length > 20) {
      blockingDetails.push(`- ...and ${dangerousProblems.length - 20} more issue(s)`);
    }
  }

  const isBlocking = blockingFindings.length > 0;

  const ownerApprovalNote = workflowFilesChanged.length > 0
    ? [
        "",
        "Workflow files changed in this PR:",
        ...workflowFilesChanged.map((name) => `- \`${name}\``),
        "",
        "Reminder: workflow changes require owner approval via `CI Required Gate`.",
      ].join("\n")
    : "";

  const commentBody = [
    marker,
    isBlocking
      ? "### PR intake checks failed (blocking)"
      : "### PR intake checks found warnings (non-blocking)",
    "",
    isBlocking
      ? "Fast safe checks found blocking safety issues:"
      : "Fast safe checks found advisory issues. CI lint/test/build gates still enforce merge quality.",
    ...(blockingFindings.length > 0 ? blockingFindings.map((entry) => `- ${entry}`) : []),
    ...(advisoryFindings.length > 0 ? advisoryFindings.map((entry) => `- ${entry}`) : []),
    "",
    "Action items:",
    "1. Complete required PR template sections/fields.",
    "2. Remove tabs, trailing whitespace, and merge conflict markers from added lines.",
    "3. Re-run local checks before pushing:",
    "   - `./scripts/ci/rust_quality_gate.sh`",
    "   - `./scripts/ci/rust_strict_delta_gate.sh`",
    "   - `./scripts/ci/docs_quality_gate.sh`",
    ...(shouldRetargetToDev
      ? ["4. Retarget this PR base branch from `main` to `dev`."]
      : []),
    "",
    `Run logs: ${runUrl}`,
    "",
    "Detected blocking line issues (sample):",
    ...(blockingDetails.length > 0 ? blockingDetails : ["- none"]),
    "",
    "Detected advisory line issues (sample):",
    ...(advisoryDetails.length > 0 ? advisoryDetails : ["- none"]),
    ownerApprovalNote,
  ].join("\n");

  if (existing) {
    await github.rest.issues.updateComment({
      owner,
      repo,
      comment_id: existing.id,
      body: commentBody,
    });
  } else {
    await github.rest.issues.createComment({
      owner,
      repo,
      issue_number: pr.number,
      body: commentBody,
    });
  }

  if (isBlocking) {
    core.setFailed("PR intake sanity checks found blocking issues. See sticky comment for details.");
    return;
  }

  core.info("PR intake sanity checks found advisory issues only.");
};
@@ -1,805 +0,0 @@
// Apply managed PR labels (size/risk/path/module/contributor tiers).
// Extracted from pr-labeler workflow inline github-script for maintainability.

module.exports = async ({ github, context, core }) => {
  const pr = context.payload.pull_request;
  const owner = context.repo.owner;
  const repo = context.repo.repo;
  const action = context.payload.action;
  const changedLabel = context.payload.label?.name;

  const sizeLabels = ["size: XS", "size: S", "size: M", "size: L", "size: XL"];
  const computedRiskLabels = ["risk: low", "risk: medium", "risk: high"];
  const manualRiskOverrideLabel = "risk: manual";
  const managedEnforcedLabels = new Set([
    ...sizeLabels,
    manualRiskOverrideLabel,
    ...computedRiskLabels,
  ]);
  if ((action === "labeled" || action === "unlabeled") && !managedEnforcedLabels.has(changedLabel)) {
    core.info(`skip non-size/risk label event: ${changedLabel || "unknown"}`);
    return;
  }

  async function loadContributorTierPolicy() {
    const policyPath = process.env.LABEL_POLICY_PATH || ".github/label-policy.json";
    const fallback = {
      contributorTierColor: "2ED9FF",
      contributorTierRules: [
        { label: "distinguished contributor", minMergedPRs: 50 },
        { label: "principal contributor", minMergedPRs: 20 },
        { label: "experienced contributor", minMergedPRs: 10 },
        { label: "trusted contributor", minMergedPRs: 5 },
      ],
    };
    try {
      const { data } = await github.rest.repos.getContent({
        owner,
        repo,
        path: policyPath,
        ref: context.payload.repository?.default_branch || "main",
      });
      const json = JSON.parse(Buffer.from(data.content, "base64").toString("utf8"));
      const contributorTierRules = (json.contributor_tiers || []).map((entry) => ({
        label: String(entry.label || "").trim(),
        minMergedPRs: Number(entry.min_merged_prs || 0),
      }));
      const contributorTierColor = String(json.contributor_tier_color || "").toUpperCase();
      if (!contributorTierColor || contributorTierRules.length === 0) {
        return fallback;
      }
      return { contributorTierColor, contributorTierRules };
    } catch (error) {
      core.warning(`failed to load ${policyPath}, using fallback policy: ${error.message}`);
      return fallback;
    }
  }

  const { contributorTierColor, contributorTierRules } = await loadContributorTierPolicy();
  const contributorTierLabels = contributorTierRules.map((rule) => rule.label);

  const managedPathLabels = [
    "docs",
    "dependencies",
    "ci",
    "core",
    "agent",
    "channel",
    "config",
    "cron",
    "daemon",
    "doctor",
    "gateway",
    "health",
    "heartbeat",
    "integration",
    "memory",
    "observability",
    "onboard",
    "provider",
    "runtime",
    "security",
    "service",
    "skillforge",
    "skills",
    "tool",
    "tunnel",
    "tests",
    "scripts",
    "dev",
  ];
  const managedPathLabelSet = new Set(managedPathLabels);

  const moduleNamespaceRules = [
    { root: "src/agent/", prefix: "agent", coreEntries: new Set(["mod.rs"]) },
    { root: "src/channels/", prefix: "channel", coreEntries: new Set(["mod.rs", "traits.rs"]) },
    { root: "src/config/", prefix: "config", coreEntries: new Set(["mod.rs", "schema.rs"]) },
    { root: "src/cron/", prefix: "cron", coreEntries: new Set(["mod.rs"]) },
    { root: "src/daemon/", prefix: "daemon", coreEntries: new Set(["mod.rs"]) },
    { root: "src/doctor/", prefix: "doctor", coreEntries: new Set(["mod.rs"]) },
    { root: "src/gateway/", prefix: "gateway", coreEntries: new Set(["mod.rs"]) },
    { root: "src/health/", prefix: "health", coreEntries: new Set(["mod.rs"]) },
    { root: "src/heartbeat/", prefix: "heartbeat", coreEntries: new Set(["mod.rs"]) },
    { root: "src/integrations/", prefix: "integration", coreEntries: new Set(["mod.rs", "registry.rs"]) },
    { root: "src/memory/", prefix: "memory", coreEntries: new Set(["mod.rs", "traits.rs"]) },
    { root: "src/observability/", prefix: "observability", coreEntries: new Set(["mod.rs", "traits.rs"]) },
    { root: "src/onboard/", prefix: "onboard", coreEntries: new Set(["mod.rs"]) },
    { root: "src/providers/", prefix: "provider", coreEntries: new Set(["mod.rs", "traits.rs"]) },
    { root: "src/runtime/", prefix: "runtime", coreEntries: new Set(["mod.rs", "traits.rs"]) },
    { root: "src/security/", prefix: "security", coreEntries: new Set(["mod.rs"]) },
    { root: "src/service/", prefix: "service", coreEntries: new Set(["mod.rs"]) },
    { root: "src/skillforge/", prefix: "skillforge", coreEntries: new Set(["mod.rs"]) },
    { root: "src/skills/", prefix: "skills", coreEntries: new Set(["mod.rs"]) },
    { root: "src/tools/", prefix: "tool", coreEntries: new Set(["mod.rs", "traits.rs"]) },
    { root: "src/tunnel/", prefix: "tunnel", coreEntries: new Set(["mod.rs"]) },
  ];
  const managedModulePrefixes = [...new Set(moduleNamespaceRules.map((rule) => `${rule.prefix}:`))];
  const orderedOtherLabelStyles = [
    { label: "health", color: "8EC9B8" },
    { label: "tool", color: "7FC4B6" },
    { label: "agent", color: "86C4A2" },
    { label: "memory", color: "8FCB99" },
    { label: "channel", color: "7EB6F2" },
    { label: "service", color: "95C7B6" },
    { label: "integration", color: "8DC9AE" },
    { label: "tunnel", color: "9FC8B3" },
    { label: "config", color: "AABCD0" },
    { label: "observability", color: "84C9D0" },
    { label: "docs", color: "8FBBE0" },
    { label: "dev", color: "B9C1CC" },
    { label: "tests", color: "9DC8C7" },
    { label: "skills", color: "BFC89B" },
    { label: "skillforge", color: "C9C39B" },
    { label: "provider", color: "958DF0" },
    { label: "runtime", color: "A3ADD8" },
    { label: "heartbeat", color: "C0C88D" },
    { label: "daemon", color: "C8C498" },
    { label: "doctor", color: "C1CF9D" },
    { label: "onboard", color: "D2BF86" },
    { label: "cron", color: "D2B490" },
    { label: "ci", color: "AEB4CE" },
    { label: "dependencies", color: "9FB1DE" },
    { label: "gateway", color: "B5A8E5" },
    { label: "security", color: "E58D85" },
    { label: "core", color: "C8A99B" },
    { label: "scripts", color: "C9B49F" },
  ];
  const otherLabelDisplayOrder = orderedOtherLabelStyles.map((entry) => entry.label);
  const modulePrefixSet = new Set(moduleNamespaceRules.map((rule) => rule.prefix));
  const modulePrefixPriority = otherLabelDisplayOrder.filter((label) => modulePrefixSet.has(label));
  const pathLabelPriority = [...otherLabelDisplayOrder];
  const riskDisplayOrder = ["risk: high", "risk: medium", "risk: low", "risk: manual"];
  const sizeDisplayOrder = ["size: XS", "size: S", "size: M", "size: L", "size: XL"];
  const contributorDisplayOrder = [
    "distinguished contributor",
    "principal contributor",
    "experienced contributor",
    "trusted contributor",
  ];
  const modulePrefixPriorityIndex = new Map(
    modulePrefixPriority.map((prefix, index) => [prefix, index])
  );
  const pathLabelPriorityIndex = new Map(
    pathLabelPriority.map((label, index) => [label, index])
  );
  const riskPriorityIndex = new Map(
    riskDisplayOrder.map((label, index) => [label, index])
  );
  const sizePriorityIndex = new Map(
    sizeDisplayOrder.map((label, index) => [label, index])
  );
  const contributorPriorityIndex = new Map(
    contributorDisplayOrder.map((label, index) => [label, index])
  );

  const otherLabelColors = Object.fromEntries(
    orderedOtherLabelStyles.map((entry) => [entry.label, entry.color])
  );
  const staticLabelColors = {
    "size: XS": "E7CDD3",
    "size: S": "E1BEC7",
    "size: M": "DBB0BB",
    "size: L": "D4A2AF",
    "size: XL": "CE94A4",
    "risk: low": "97D3A6",
    "risk: medium": "E4C47B",
    "risk: high": "E98E88",
    "risk: manual": "B7A4E0",
    ...otherLabelColors,
  };
  const staticLabelDescriptions = {
    "size: XS": "Auto size: <=80 non-doc changed lines.",
    "size: S": "Auto size: 81-250 non-doc changed lines.",
    "size: M": "Auto size: 251-500 non-doc changed lines.",
    "size: L": "Auto size: 501-1000 non-doc changed lines.",
    "size: XL": "Auto size: >1000 non-doc changed lines.",
    "risk: low": "Auto risk: docs/chore-only paths.",
    "risk: medium": "Auto risk: src/** or dependency/config changes.",
    "risk: high": "Auto risk: security/runtime/gateway/tools/workflows.",
    "risk: manual": "Maintainer override: keep selected risk label.",
    docs: "Auto scope: docs/markdown/template files changed.",
    dependencies: "Auto scope: dependency manifest/lock/policy changed.",
    ci: "Auto scope: CI/workflow/hook files changed.",
    core: "Auto scope: root src/*.rs files changed.",
    agent: "Auto scope: src/agent/** changed.",
    channel: "Auto scope: src/channels/** changed.",
    config: "Auto scope: src/config/** changed.",
    cron: "Auto scope: src/cron/** changed.",
    daemon: "Auto scope: src/daemon/** changed.",
    doctor: "Auto scope: src/doctor/** changed.",
    gateway: "Auto scope: src/gateway/** changed.",
    health: "Auto scope: src/health/** changed.",
    heartbeat: "Auto scope: src/heartbeat/** changed.",
    integration: "Auto scope: src/integrations/** changed.",
    memory: "Auto scope: src/memory/** changed.",
    observability: "Auto scope: src/observability/** changed.",
    onboard: "Auto scope: src/onboard/** changed.",
    provider: "Auto scope: src/providers/** changed.",
    runtime: "Auto scope: src/runtime/** changed.",
    security: "Auto scope: src/security/** changed.",
    service: "Auto scope: src/service/** changed.",
    skillforge: "Auto scope: src/skillforge/** changed.",
    skills: "Auto scope: src/skills/** changed.",
    tool: "Auto scope: src/tools/** changed.",
    tunnel: "Auto scope: src/tunnel/** changed.",
    tests: "Auto scope: tests/** changed.",
    scripts: "Auto scope: scripts/** changed.",
    dev: "Auto scope: dev/** changed.",
  };
  for (const label of contributorTierLabels) {
    staticLabelColors[label] = contributorTierColor;
    const rule = contributorTierRules.find((entry) => entry.label === label);
    if (rule) {
      staticLabelDescriptions[label] = `Contributor with ${rule.minMergedPRs}+ merged PRs.`;
    }
  }

  const modulePrefixColors = Object.fromEntries(
    modulePrefixPriority.map((prefix) => [
      `${prefix}:`,
      otherLabelColors[prefix] || "BFDADC",
    ])
  );

  const providerKeywordHints = [
    "deepseek",
    "moonshot",
    "kimi",
    "qwen",
    "mistral",
    "doubao",
    "baichuan",
    "yi",
    "siliconflow",
    "vertex",
    "azure",
    "perplexity",
    "venice",
    "vercel",
    "cloudflare",
    "synthetic",
    "opencode",
    "zai",
    "glm",
    "minimax",
    "bedrock",
    "qianfan",
    "groq",
    "together",
    "fireworks",
    "novita",
    "cohere",
    "openai",
    "openrouter",
    "anthropic",
    "gemini",
    "ollama",
  ];

  const channelKeywordHints = [
    "telegram",
    "discord",
    "slack",
    "whatsapp",
    "matrix",
    "irc",
    "imessage",
    "email",
    "cli",
  ];

  function isDocsLike(path) {
    return (
      path.startsWith("docs/") ||
      path.endsWith(".md") ||
      path.endsWith(".mdx") ||
      path === "LICENSE" ||
      path === ".markdownlint-cli2.yaml" ||
      path === ".github/pull_request_template.md" ||
      path.startsWith(".github/ISSUE_TEMPLATE/")
    );
  }

  function normalizeLabelSegment(segment) {
    return (segment || "")
      .toLowerCase()
      .replace(/\.rs$/g, "")
      .replace(/[^a-z0-9_-]+/g, "-")
      .replace(/^[-_]+|[-_]+$/g, "")
      .slice(0, 40);
  }

  function containsKeyword(text, keyword) {
    const escaped = keyword.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    const pattern = new RegExp(`(^|[^a-z0-9_])${escaped}([^a-z0-9_]|$)`, "i");
    return pattern.test(text);
  }

  function formatModuleLabel(prefix, segment) {
    return `${prefix}: ${segment}`;
  }

  function parseModuleLabel(label) {
    if (typeof label !== "string") return null;
    const match = label.match(/^([^:]+):\s*(.+)$/);
    if (!match) return null;
    const prefix = match[1].trim().toLowerCase();
    const segment = (match[2] || "").trim().toLowerCase();
    if (!prefix || !segment) return null;
    return { prefix, segment };
  }

  function sortByPriority(labels, priorityIndex) {
    return [...new Set(labels)].sort((left, right) => {
      const leftPriority = priorityIndex.has(left) ? priorityIndex.get(left) : Number.MAX_SAFE_INTEGER;
      const rightPriority = priorityIndex.has(right)
        ? priorityIndex.get(right)
        : Number.MAX_SAFE_INTEGER;
      if (leftPriority !== rightPriority) return leftPriority - rightPriority;
      return left.localeCompare(right);
    });
  }

  function sortModuleLabels(labels) {
    return [...new Set(labels)].sort((left, right) => {
      const leftParsed = parseModuleLabel(left);
      const rightParsed = parseModuleLabel(right);
      if (!leftParsed || !rightParsed) return left.localeCompare(right);

      const leftPrefixPriority = modulePrefixPriorityIndex.has(leftParsed.prefix)
        ? modulePrefixPriorityIndex.get(leftParsed.prefix)
        : Number.MAX_SAFE_INTEGER;
      const rightPrefixPriority = modulePrefixPriorityIndex.has(rightParsed.prefix)
        ? modulePrefixPriorityIndex.get(rightParsed.prefix)
        : Number.MAX_SAFE_INTEGER;

      if (leftPrefixPriority !== rightPrefixPriority) {
        return leftPrefixPriority - rightPrefixPriority;
      }
      if (leftParsed.prefix !== rightParsed.prefix) {
        return leftParsed.prefix.localeCompare(rightParsed.prefix);
      }

      const leftIsCore = leftParsed.segment === "core";
      const rightIsCore = rightParsed.segment === "core";
      if (leftIsCore !== rightIsCore) return leftIsCore ? 1 : -1;

      return leftParsed.segment.localeCompare(rightParsed.segment);
    });
  }

  function refineModuleLabels(rawLabels) {
    const refined = new Set(rawLabels);
    const segmentsByPrefix = new Map();

    for (const label of rawLabels) {
      const parsed = parseModuleLabel(label);
      if (!parsed) continue;
      if (!segmentsByPrefix.has(parsed.prefix)) {
        segmentsByPrefix.set(parsed.prefix, new Set());
      }
      segmentsByPrefix.get(parsed.prefix).add(parsed.segment);
    }

    for (const [prefix, segments] of segmentsByPrefix) {
      const hasSpecificSegment = [...segments].some((segment) => segment !== "core");
      if (hasSpecificSegment) {
        refined.delete(formatModuleLabel(prefix, "core"));
      }
    }

    return refined;
  }

  function compactModuleLabels(labels) {
    const groupedSegments = new Map();
    const compactedModuleLabels = new Set();
    const forcePathPrefixes = new Set();

    for (const label of labels) {
      const parsed = parseModuleLabel(label);
      if (!parsed) {
        compactedModuleLabels.add(label);
        continue;
      }
      if (!groupedSegments.has(parsed.prefix)) {
        groupedSegments.set(parsed.prefix, new Set());
      }
      groupedSegments.get(parsed.prefix).add(parsed.segment);
    }

    for (const [prefix, segments] of groupedSegments) {
      const uniqueSegments = [...new Set([...segments].filter(Boolean))];
      if (uniqueSegments.length === 0) continue;

      if (uniqueSegments.length === 1) {
        compactedModuleLabels.add(formatModuleLabel(prefix, uniqueSegments[0]));
      } else {
        forcePathPrefixes.add(prefix);
      }
    }

    return {
      moduleLabels: compactedModuleLabels,
      forcePathPrefixes,
    };
  }

  function colorForLabel(label) {
    if (staticLabelColors[label]) return staticLabelColors[label];
    const matchedPrefix = Object.keys(modulePrefixColors).find((prefix) => label.startsWith(prefix));
    if (matchedPrefix) return modulePrefixColors[matchedPrefix];
    return "BFDADC";
  }

  function descriptionForLabel(label) {
    if (staticLabelDescriptions[label]) return staticLabelDescriptions[label];

    const parsed = parseModuleLabel(label);
    if (parsed) {
      if (parsed.segment === "core") {
        return `Auto module: ${parsed.prefix} core files changed.`;
      }
      return `Auto module: ${parsed.prefix}/${parsed.segment} changed.`;
    }

    return "Auto-managed label.";
  }

  async function ensureLabel(name, existing = null) {
    const expectedColor = colorForLabel(name);
    const expectedDescription = descriptionForLabel(name);
    try {
      const current = existing || (await github.rest.issues.getLabel({ owner, repo, name })).data;
      const currentColor = (current.color || "").toUpperCase();
      const currentDescription = (current.description || "").trim();
      if (currentColor !== expectedColor || currentDescription !== expectedDescription) {
        await github.rest.issues.updateLabel({
          owner,
          repo,
          name,
          new_name: name,
          color: expectedColor,
          description: expectedDescription,
        });
      }
    } catch (error) {
      if (error.status !== 404) throw error;
      await github.rest.issues.createLabel({
        owner,
        repo,
        name,
        color: expectedColor,
        description: expectedDescription,
      });
    }
  }

  function isManagedLabel(label) {
    if (label === manualRiskOverrideLabel) return true;
    if (sizeLabels.includes(label) || computedRiskLabels.includes(label)) return true;
    if (managedPathLabelSet.has(label)) return true;
    if (contributorTierLabels.includes(label)) return true;
    if (managedModulePrefixes.some((prefix) => label.startsWith(prefix))) return true;
    return false;
  }

  async function ensureManagedRepoLabelsMetadata() {
    const repoLabels = await github.paginate(github.rest.issues.listLabelsForRepo, {
      owner,
      repo,
      per_page: 100,
    });

    for (const existingLabel of repoLabels) {
      const labelName = existingLabel.name || "";
      if (!isManagedLabel(labelName)) continue;
      await ensureLabel(labelName, existingLabel);
    }
  }

  function selectContributorTier(mergedCount) {
    const matchedTier = contributorTierRules.find((rule) => mergedCount >= rule.minMergedPRs);
    return matchedTier ? matchedTier.label : null;
  }

  if (context.eventName === "workflow_dispatch") {
    const mode = (context.payload.inputs?.mode || "audit").toLowerCase();
    const shouldRepair = mode === "repair";
    const repoLabels = await github.paginate(github.rest.issues.listLabelsForRepo, {
      owner,
      repo,
      per_page: 100,
    });

    let managedScanned = 0;
    const drifts = [];

    for (const existingLabel of repoLabels) {
      const labelName = existingLabel.name || "";
      if (!isManagedLabel(labelName)) continue;
      managedScanned += 1;

      const expectedColor = colorForLabel(labelName);
      const expectedDescription = descriptionForLabel(labelName);
      const currentColor = (existingLabel.color || "").toUpperCase();
      const currentDescription = (existingLabel.description || "").trim();
      if (currentColor !== expectedColor || currentDescription !== expectedDescription) {
        drifts.push({
          name: labelName,
          currentColor,
          expectedColor,
          currentDescription,
          expectedDescription,
        });
        if (shouldRepair) {
          await ensureLabel(labelName, existingLabel);
        }
      }
    }

    core.summary
      .addHeading("Managed Label Governance", 2)
      .addRaw(`Mode: ${shouldRepair ? "repair" : "audit"}`)
      .addEOL()
      .addRaw(`Managed labels scanned: ${managedScanned}`)
      .addEOL()
      .addRaw(`Drifts found: ${drifts.length}`)
      .addEOL();

    if (drifts.length > 0) {
      const sample = drifts.slice(0, 30).map((entry) => [
        entry.name,
        `${entry.currentColor} -> ${entry.expectedColor}`,
        `${entry.currentDescription || "(blank)"} -> ${entry.expectedDescription}`,
      ]);
      core.summary.addTable([
        [{ data: "Label", header: true }, { data: "Color", header: true }, { data: "Description", header: true }],
        ...sample,
      ]);
      if (drifts.length > sample.length) {
        core.summary
          .addRaw(`Additional drifts not shown: ${drifts.length - sample.length}`)
          .addEOL();
      }
    }

    await core.summary.write();

    if (!shouldRepair && drifts.length > 0) {
      core.info(`Managed-label metadata drifts detected: ${drifts.length}. Re-run with mode=repair to auto-fix.`);
    } else if (shouldRepair) {
      core.info(`Managed-label metadata repair applied to ${drifts.length} labels.`);
|
||||
} else {
|
||||
core.info("No managed-label metadata drift detected.");
|
||||
}
|
||||
|
||||
return;
|
||||
}
|
||||
|
||||
const files = await github.paginate(github.rest.pulls.listFiles, {
|
||||
owner,
|
||||
repo,
|
||||
pull_number: pr.number,
|
||||
per_page: 100,
|
||||
});
|
||||
|
||||
const detectedModuleLabels = new Set();
|
||||
for (const file of files) {
|
||||
const path = (file.filename || "").toLowerCase();
|
||||
for (const rule of moduleNamespaceRules) {
|
||||
if (!path.startsWith(rule.root)) continue;
|
||||
|
||||
const relative = path.slice(rule.root.length);
|
||||
if (!relative) continue;
|
||||
|
||||
const first = relative.split("/")[0];
|
||||
const firstStem = first.endsWith(".rs") ? first.slice(0, -3) : first;
|
||||
let segment = firstStem;
|
||||
|
||||
if (rule.coreEntries.has(first) || rule.coreEntries.has(firstStem)) {
|
||||
segment = "core";
|
||||
}
|
||||
|
||||
segment = normalizeLabelSegment(segment);
|
||||
if (!segment) continue;
|
||||
|
||||
detectedModuleLabels.add(formatModuleLabel(rule.prefix, segment));
|
||||
}
|
||||
}
|
||||
|
||||
const providerRelevantFiles = files.filter((file) => {
|
||||
const path = file.filename || "";
|
||||
return (
|
||||
path.startsWith("src/providers/") ||
|
||||
path.startsWith("src/integrations/") ||
|
||||
path.startsWith("src/onboard/") ||
|
||||
path.startsWith("src/config/")
|
||||
);
|
||||
});
|
||||
|
||||
if (providerRelevantFiles.length > 0) {
|
||||
const searchableText = [
|
||||
pr.title || "",
|
||||
pr.body || "",
|
||||
...providerRelevantFiles.map((file) => file.filename || ""),
|
||||
...providerRelevantFiles.map((file) => file.patch || ""),
|
||||
]
|
||||
.join("\n")
|
||||
.toLowerCase();
|
||||
|
||||
for (const keyword of providerKeywordHints) {
|
||||
if (containsKeyword(searchableText, keyword)) {
|
||||
detectedModuleLabels.add(formatModuleLabel("provider", keyword));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
const channelRelevantFiles = files.filter((file) => {
|
||||
const path = file.filename || "";
|
||||
return (
|
||||
path.startsWith("src/channels/") ||
|
||||
path.startsWith("src/onboard/") ||
|
||||
path.startsWith("src/config/")
|
||||
);
|
||||
});
|
||||
|
||||
if (channelRelevantFiles.length > 0) {
|
||||
const searchableText = [
|
||||
pr.title || "",
|
||||
pr.body || "",
|
||||
...channelRelevantFiles.map((file) => file.filename || ""),
|
||||
...channelRelevantFiles.map((file) => file.patch || ""),
|
||||
]
|
||||
.join("\n")
|
||||
.toLowerCase();
|
||||
|
||||
for (const keyword of channelKeywordHints) {
|
||||
if (containsKeyword(searchableText, keyword)) {
|
||||
detectedModuleLabels.add(formatModuleLabel("channel", keyword));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
const refinedModuleLabels = refineModuleLabels(detectedModuleLabels);
|
||||
const compactedModuleState = compactModuleLabels(refinedModuleLabels);
|
||||
const selectedModuleLabels = compactedModuleState.moduleLabels;
|
||||
const forcePathPrefixes = compactedModuleState.forcePathPrefixes;
|
||||
const modulePrefixesWithLabels = new Set(
|
||||
[...selectedModuleLabels]
|
||||
.map((label) => parseModuleLabel(label)?.prefix)
|
||||
.filter(Boolean)
|
||||
);
|
||||
|
||||
const { data: currentLabels } = await github.rest.issues.listLabelsOnIssue({
|
||||
owner,
|
||||
repo,
|
||||
issue_number: pr.number,
|
||||
});
|
||||
const currentLabelNames = currentLabels.map((label) => label.name);
|
||||
const currentPathLabels = currentLabelNames.filter((label) => managedPathLabelSet.has(label));
|
||||
const candidatePathLabels = new Set([...currentPathLabels, ...forcePathPrefixes]);
|
||||
|
||||
const dedupedPathLabels = [...candidatePathLabels].filter((label) => {
|
||||
if (label === "core") return true;
|
||||
if (forcePathPrefixes.has(label)) return true;
|
||||
return !modulePrefixesWithLabels.has(label);
|
||||
});
|
||||
|
||||
const excludedLockfiles = new Set(["Cargo.lock"]);
|
||||
const changedLines = files.reduce((total, file) => {
|
||||
const path = file.filename || "";
|
||||
if (isDocsLike(path) || excludedLockfiles.has(path)) {
|
||||
return total;
|
||||
}
|
||||
return total + (file.additions || 0) + (file.deletions || 0);
|
||||
}, 0);
|
||||
|
||||
let sizeLabel = "size: XL";
|
||||
if (changedLines <= 80) sizeLabel = "size: XS";
|
||||
else if (changedLines <= 250) sizeLabel = "size: S";
|
||||
else if (changedLines <= 500) sizeLabel = "size: M";
|
||||
else if (changedLines <= 1000) sizeLabel = "size: L";
|
||||
|
||||
const hasHighRiskPath = files.some((file) => {
|
||||
const path = file.filename || "";
|
||||
return (
|
||||
path.startsWith("src/security/") ||
|
||||
path.startsWith("src/runtime/") ||
|
||||
path.startsWith("src/gateway/") ||
|
||||
path.startsWith("src/tools/") ||
|
||||
path.startsWith(".github/workflows/")
|
||||
);
|
||||
});
|
||||
|
||||
const hasMediumRiskPath = files.some((file) => {
|
||||
const path = file.filename || "";
|
||||
return (
|
||||
path.startsWith("src/") ||
|
||||
path === "Cargo.toml" ||
|
||||
path === "Cargo.lock" ||
|
||||
path === "deny.toml" ||
|
||||
path.startsWith(".githooks/")
|
||||
);
|
||||
});
|
||||
|
||||
let riskLabel = "risk: low";
|
||||
if (hasHighRiskPath) {
|
||||
riskLabel = "risk: high";
|
||||
} else if (hasMediumRiskPath) {
|
||||
riskLabel = "risk: medium";
|
||||
}
|
||||
|
||||
await ensureManagedRepoLabelsMetadata();
|
||||
|
||||
const labelsToEnsure = new Set([
|
||||
...sizeLabels,
|
||||
...computedRiskLabels,
|
||||
manualRiskOverrideLabel,
|
||||
...managedPathLabels,
|
||||
...contributorTierLabels,
|
||||
...selectedModuleLabels,
|
||||
]);
|
||||
|
||||
for (const label of labelsToEnsure) {
|
||||
await ensureLabel(label);
|
||||
}
|
||||
|
||||
let contributorTierLabel = null;
|
||||
const authorLogin = pr.user?.login;
|
||||
if (authorLogin && pr.user?.type !== "Bot") {
|
||||
try {
|
||||
const { data: mergedSearch } = await github.rest.search.issuesAndPullRequests({
|
||||
q: `repo:${owner}/${repo} is:pr is:merged author:${authorLogin}`,
|
||||
per_page: 1,
|
||||
});
|
||||
const mergedCount = mergedSearch.total_count || 0;
|
||||
contributorTierLabel = selectContributorTier(mergedCount);
|
||||
} catch (error) {
|
||||
core.warning(`failed to compute contributor tier label: ${error.message}`);
|
||||
}
|
||||
}
|
||||
|
||||
const hasManualRiskOverride = currentLabelNames.includes(manualRiskOverrideLabel);
|
||||
const keepNonManagedLabels = currentLabelNames.filter((label) => {
|
||||
if (label === manualRiskOverrideLabel) return true;
|
||||
if (contributorTierLabels.includes(label)) return false;
|
||||
if (sizeLabels.includes(label) || computedRiskLabels.includes(label)) return false;
|
||||
if (managedPathLabelSet.has(label)) return false;
|
||||
if (managedModulePrefixes.some((prefix) => label.startsWith(prefix))) return false;
|
||||
return true;
|
||||
});
|
||||
|
||||
const manualRiskSelection =
|
||||
currentLabelNames.find((label) => computedRiskLabels.includes(label)) || riskLabel;
|
||||
|
||||
const moduleLabelList = sortModuleLabels([...selectedModuleLabels]);
|
||||
const contributorLabelList = contributorTierLabel ? [contributorTierLabel] : [];
|
||||
const selectedRiskLabels = hasManualRiskOverride
|
||||
? sortByPriority([manualRiskSelection, manualRiskOverrideLabel], riskPriorityIndex)
|
||||
: sortByPriority([riskLabel], riskPriorityIndex);
|
||||
const selectedSizeLabels = sortByPriority([sizeLabel], sizePriorityIndex);
|
||||
const sortedContributorLabels = sortByPriority(contributorLabelList, contributorPriorityIndex);
|
||||
const sortedPathLabels = sortByPriority(dedupedPathLabels, pathLabelPriorityIndex);
|
||||
const sortedKeepNonManagedLabels = [...new Set(keepNonManagedLabels)].sort((left, right) =>
|
||||
left.localeCompare(right)
|
||||
);
|
||||
|
||||
const nextLabels = [
|
||||
...new Set([
|
||||
...selectedRiskLabels,
|
||||
...selectedSizeLabels,
|
||||
...sortedContributorLabels,
|
||||
...moduleLabelList,
|
||||
...sortedPathLabels,
|
||||
...sortedKeepNonManagedLabels,
|
||||
]),
|
||||
];
|
||||
|
||||
await github.rest.issues.setLabels({
|
||||
owner,
|
||||
repo,
|
||||
issue_number: pr.number,
|
||||
labels: nextLabels,
|
||||
});
|
||||
};
|
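For reference, the size-bucket thresholds used by the labeling script above can be restated as a standalone function. This is a minimal sketch; `sizeForChangedLines` is a hypothetical helper name for illustration and is not part of the workflow script itself.

```javascript
// Minimal sketch of the size-bucket thresholds from the labeling script.
// `sizeForChangedLines` is a hypothetical name, not part of the workflow.
function sizeForChangedLines(changedLines) {
  if (changedLines <= 80) return "size: XS";
  if (changedLines <= 250) return "size: S";
  if (changedLines <= 500) return "size: M";
  if (changedLines <= 1000) return "size: L";
  return "size: XL";
}

console.log(sizeForChangedLines(42));   // "size: XS"
console.log(sizeForChangedLines(600));  // "size: L"
console.log(sizeForChangedLines(5000)); // "size: XL"
```

Note that the script counts only non-docs, non-lockfile lines toward `changedLines`, so a large docs-only PR can still land in the XS bucket.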
@@ -1,57 +0,0 @@
// Extracted from test-benchmarks.yml step: Post benchmark summary on PR

module.exports = async ({ github, context, core }) => {
  const fs = require('fs');
  const output = fs.readFileSync('benchmark_output.txt', 'utf8');

  // Extract Criterion result lines
  const lines = output.split('\n').filter(l =>
    l.includes('time:') || l.includes('change:') || l.includes('Performance')
  );

  if (lines.length === 0) {
    core.info('No benchmark results to post.');
    return;
  }

  const body = [
    '## 📊 Benchmark Results',
    '',
    '```',
    lines.join('\n'),
    '```',
    '',
    '<details><summary>Full output</summary>',
    '',
    '```',
    output.substring(0, 60000),
    '```',
    '</details>',
  ].join('\n');

  // Find and update or create comment
  const { data: comments } = await github.rest.issues.listComments({
    owner: context.repo.owner,
    repo: context.repo.repo,
    issue_number: context.payload.pull_request.number,
  });

  const marker = '## 📊 Benchmark Results';
  const existing = comments.find(c => c.body && c.body.startsWith(marker));

  if (existing) {
    await github.rest.issues.updateComment({
      owner: context.repo.owner,
      repo: context.repo.repo,
      comment_id: existing.id,
      body,
    });
  } else {
    await github.rest.issues.createComment({
      owner: context.repo.owner,
      repo: context.repo.repo,
      issue_number: context.payload.pull_request.number,
      body,
    });
  }
};
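The comment script above keeps only Criterion summary lines before posting. A self-contained sketch of that filter, using made-up sample output rather than real benchmark data:

```javascript
// Sketch of the Criterion line filter from the PR-comment script above.
// The sample text is illustrative, not real benchmark output.
const sample = [
  "parse_config            time:   [1.21 ms 1.23 ms 1.26 ms]",
  "                        change: [-2.1% +0.4% +2.9%] (p = 0.71 > 0.05)",
  "Benchmarking tool_params",
  "Performance has improved.",
].join("\n");

const lines = sample.split("\n").filter(
  (l) => l.includes("time:") || l.includes("change:") || l.includes("Performance")
);
console.log(lines.length); // 3 (the "Benchmarking ..." progress line is dropped)
```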
@@ -1,57 +0,0 @@
name: Sec Audit

on:
  push:
    branches: [dev, main]
    paths:
      - "Cargo.toml"
      - "Cargo.lock"
      - "src/**"
      - "crates/**"
      - "deny.toml"
  pull_request:
    branches: [dev, main]
    paths:
      - "Cargo.toml"
      - "Cargo.lock"
      - "src/**"
      - "crates/**"
      - "deny.toml"
  schedule:
    - cron: "0 6 * * 1" # Weekly on Monday 6am UTC

concurrency:
  group: security-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

permissions:
  contents: read
  security-events: write
  actions: read
  checks: write

env:
  CARGO_TERM_COLOR: always

jobs:
  audit:
    name: Security Audit
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: rustsec/audit-check@69366f33c96575abad1ee0dba8212993eecbe998 # v2.0.0
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

  deny:
    name: License & Supply Chain
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: EmbarkStudios/cargo-deny-action@3fd3802e88374d3fe9159b834c7714ec57d6c979 # v2
        with:
          command: check advisories licenses sources
@@ -1,39 +0,0 @@
name: Sec CodeQL

on:
  schedule:
    - cron: "0 6 * * 1" # Weekly Monday 6am UTC
  workflow_dispatch:

concurrency:
  group: codeql-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: read
  security-events: write
  actions: read

jobs:
  codeql:
    name: CodeQL Analysis
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 30
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Initialize CodeQL
        uses: github/codeql-action/init@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4
        with:
          languages: rust
          config-file: ./.github/codeql/codeql-config.yml

      - name: Set up Rust
        uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable

      - name: Build
        run: cargo build --workspace --all-targets

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@89a39a4e59826350b863aa6b6252a07ad50cf83e # v4
@@ -1,185 +0,0 @@
name: Sec Vorpal Reviewdog

on:
  workflow_dispatch:
    inputs:
      scan_scope:
        description: "File selection mode when source_path is empty"
        required: true
        type: choice
        default: changed
        options:
          - changed
          - all
      base_ref:
        description: "Base branch/ref for changed diff mode"
        required: true
        type: string
        default: main
      source_path:
        description: "Optional comma-separated file paths to scan (overrides scan_scope)"
        required: false
        type: string
      include_tests:
        description: "Include test/fixture files in scan selection"
        required: true
        type: choice
        default: "false"
        options:
          - "false"
          - "true"
      folders_to_ignore:
        description: "Optional comma-separated path prefixes to ignore"
        required: false
        type: string
        default: target,node_modules,web/dist,.venv,venv
      reporter:
        description: "Reviewdog reporter mode"
        required: true
        type: choice
        default: github-pr-check
        options:
          - github-pr-check
          - github-pr-review
      filter_mode:
        description: "Reviewdog filter mode"
        required: true
        type: choice
        default: file
        options:
          - added
          - diff_context
          - file
          - nofilter
      level:
        description: "Reviewdog severity level"
        required: true
        type: choice
        default: error
        options:
          - info
          - warning
          - error
      fail_on_error:
        description: "Fail workflow when Vorpal reports findings"
        required: true
        type: choice
        default: "false"
        options:
          - "false"
          - "true"
      reviewdog_flags:
        description: "Optional extra reviewdog flags"
        required: false
        type: string

concurrency:
  group: sec-vorpal-reviewdog-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: read
  checks: write
  pull-requests: write

jobs:
  vorpal:
    name: Vorpal Reviewdog Scan
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 20
    steps:
      - name: Checkout
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Resolve source paths
        id: sources
        shell: bash
        env:
          INPUT_SOURCE_PATH: ${{ inputs.source_path }}
          INPUT_SCAN_SCOPE: ${{ inputs.scan_scope }}
          INPUT_BASE_REF: ${{ inputs.base_ref }}
          INPUT_INCLUDE_TESTS: ${{ inputs.include_tests }}
        run: |
          set -euo pipefail

          strip_space() {
            local value="$1"
            value="${value//$'\n'/}"
            value="${value//$'\r'/}"
            value="${value// /}"
            echo "$value"
          }

          source_override="$(strip_space "${INPUT_SOURCE_PATH}")"
          if [ -n "${source_override}" ]; then
            normalized="$(echo "${INPUT_SOURCE_PATH}" | tr '\n' ',' | sed -E 's/[[:space:]]+//g; s/,+/,/g; s/^,|,$//g')"
            if [ -n "${normalized}" ]; then
              {
                echo "scan=true"
                echo "source_path=${normalized}"
                echo "selection=manual"
              } >> "${GITHUB_OUTPUT}"
              exit 0
            fi
          fi

          include_ext='\.(py|js|jsx|ts|tsx)$'
          exclude_paths='^(target/|node_modules/|web/node_modules/|dist/|web/dist/|\.venv/|venv/)'
          exclude_tests='(^|/)(test|tests|__tests__|fixtures|mocks|examples)/|(^|/)test_helpers/|(_test\.py$)|(^|/)test_.*\.py$|(\.spec\.(ts|tsx|js|jsx)$)|(\.test\.(ts|tsx|js|jsx)$)'

          if [ "${INPUT_SCAN_SCOPE}" = "all" ]; then
            candidate_files="$(git ls-files)"
          else
            base_ref="${INPUT_BASE_REF#refs/heads/}"
            base_ref="${base_ref#origin/}"
            if git fetch --no-tags --depth=1 origin "${base_ref}" >/dev/null 2>&1; then
              if merge_base="$(git merge-base HEAD "origin/${base_ref}" 2>/dev/null)"; then
                candidate_files="$(git diff --name-only --diff-filter=ACMR "${merge_base}"...HEAD)"
              else
                echo "Unable to resolve merge-base for origin/${base_ref}; falling back to tracked files."
                candidate_files="$(git ls-files)"
              fi
            else
              echo "Unable to fetch origin/${base_ref}; falling back to tracked files."
              candidate_files="$(git ls-files)"
            fi
          fi

          source_files="$(printf '%s\n' "${candidate_files}" | sed '/^$/d' | grep -E "${include_ext}" | grep -Ev "${exclude_paths}" || true)"
          if [ "${INPUT_INCLUDE_TESTS}" != "true" ] && [ -n "${source_files}" ]; then
            source_files="$(printf '%s\n' "${source_files}" | grep -Ev "${exclude_tests}" || true)"
          fi
          if [ -z "${source_files}" ]; then
            {
              echo "scan=false"
              echo "source_path="
              echo "selection=none"
            } >> "${GITHUB_OUTPUT}"
            exit 0
          fi

          source_path="$(printf '%s\n' "${source_files}" | paste -sd, -)"
          {
            echo "scan=true"
            echo "source_path=${source_path}"
            echo "selection=auto-${INPUT_SCAN_SCOPE}"
          } >> "${GITHUB_OUTPUT}"

      - name: No supported files to scan
        if: steps.sources.outputs.scan != 'true'
        shell: bash
        run: |
          echo "No supported files selected for Vorpal scan (extensions: .py .js .jsx .ts .tsx)."

      - name: Run Vorpal with reviewdog
        if: steps.sources.outputs.scan == 'true'
        uses: Checkmarx/vorpal-reviewdog-github-action@8cc292f337a2f1dea581b4f4bd73852e7becb50d # v1.2.0
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          source_path: ${{ steps.sources.outputs.source_path }}
          folders_to_ignore: ${{ inputs.folders_to_ignore }}
          reporter: ${{ inputs.reporter }}
          filter_mode: ${{ inputs.filter_mode }}
          level: ${{ inputs.level }}
          fail_on_error: ${{ inputs.fail_on_error }}
          reviewdog_flags: ${{ inputs.reviewdog_flags }}
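The "Resolve source paths" step above does its file selection with `grep -E` over three patterns. The same include/exclude logic can be sketched in JavaScript (an illustrative translation only; `selectSources` is a hypothetical name, and the workflow itself runs the bash version):

```javascript
// Illustrative JS translation of the bash file-selection filters above.
// The workflow uses grep -E; these regexes mirror its three patterns.
const includeExt = /\.(py|js|jsx|ts|tsx)$/;
const excludePaths = /^(target\/|node_modules\/|web\/node_modules\/|dist\/|web\/dist\/|\.venv\/|venv\/)/;
const excludeTests = /(^|\/)(test|tests|__tests__|fixtures|mocks|examples)\/|(^|\/)test_helpers\/|(_test\.py$)|(^|\/)test_.*\.py$|(\.spec\.(ts|tsx|js|jsx)$)|(\.test\.(ts|tsx|js|jsx)$)/;

function selectSources(files, includeTests) {
  return files
    .filter((f) => includeExt.test(f))        // keep supported extensions
    .filter((f) => !excludePaths.test(f))     // drop build/vendor dirs
    .filter((f) => includeTests || !excludeTests.test(f)); // optionally drop tests
}

const picked = selectSources(
  ["src/app.ts", "web/dist/bundle.js", "tests/util.py", "main.rs"],
  false
);
console.log(picked.join(",")); // "src/app.ts"
```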
@@ -1,116 +0,0 @@
name: Sync Contributors

on:
  workflow_dispatch:
  schedule:
    # Run every Sunday at 00:00 UTC
    - cron: '0 0 * * 0'

concurrency:
  group: update-notice-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: write
  pull-requests: write

jobs:
  update-notice:
    name: Update NOTICE with new contributors
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Fetch contributors
        id: contributors
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          # Fetch all contributors (excluding bots)
          gh api \
            --paginate \
            "repos/${{ github.repository }}/contributors" \
            --jq '.[] | select(.type != "Bot") | .login' > /tmp/contributors_raw.txt

          # Sort alphabetically and filter
          sort -f < /tmp/contributors_raw.txt > contributors.txt

          # Count contributors
          count=$(wc -l < contributors.txt | tr -d ' ')
          echo "count=$count" >> "$GITHUB_OUTPUT"

      - name: Generate new NOTICE file
        run: |
          cat > NOTICE << 'EOF'
          ZeroClaw
          Copyright 2025 ZeroClaw Labs

          This product includes software developed at ZeroClaw Labs (https://github.com/zeroclaw-labs).

          Contributors
          ============

          The following individuals have contributed to ZeroClaw:

          EOF

          # Append contributors in alphabetical order
          sed 's/^/- /' contributors.txt >> NOTICE

          # Add third-party dependencies section
          cat >> NOTICE << 'EOF'


          Third-Party Dependencies
          =========================

          This project uses the following third-party libraries and components,
          each licensed under their respective terms:

          See Cargo.lock for a complete list of dependencies and their licenses.
          EOF

      - name: Check if NOTICE changed
        id: check_diff
        run: |
          if git diff --quiet NOTICE; then
            echo "changed=false" >> "$GITHUB_OUTPUT"
          else
            echo "changed=true" >> "$GITHUB_OUTPUT"
          fi

      - name: Create Pull Request
        if: steps.check_diff.outputs.changed == 'true'
        env:
          GH_TOKEN: ${{ github.token }}
          COUNT: ${{ steps.contributors.outputs.count }}
        run: |
          branch_name="auto/update-notice-$(date +%Y%m%d)"

          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

          git checkout -b "$branch_name"
          git add NOTICE
          git commit -m "chore(notice): update contributor list"
          git push origin "$branch_name"

          gh pr create \
            --title "chore(notice): update contributor list" \
            --body "Auto-generated update to NOTICE file with $COUNT contributors." \
            --label "chore" \
            --label "docs" \
            --draft || true

      - name: Summary
        run: |
          echo "## NOTICE Update Results" >> "$GITHUB_STEP_SUMMARY"
          echo "" >> "$GITHUB_STEP_SUMMARY"
          if [ "${{ steps.check_diff.outputs.changed }}" = "true" ]; then
            echo "✅ PR created to update NOTICE" >> "$GITHUB_STEP_SUMMARY"
          else
            echo "✓ NOTICE file is up to date" >> "$GITHUB_STEP_SUMMARY"
          fi
          echo "" >> "$GITHUB_STEP_SUMMARY"
          echo "**Contributors:** ${{ steps.contributors.outputs.count }}" >> "$GITHUB_STEP_SUMMARY"
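The workflow above filters out bot accounts and sorts case-insensitively using `gh api --jq` and `sort -f`. An illustrative JavaScript equivalent of that filtering and sorting, with made-up contributor data:

```javascript
// Illustrative JS equivalent of the workflow's `--jq 'select(.type != "Bot")'`
// filter plus `sort -f` (case-insensitive). The data below is made up.
const contributors = [
  { login: "Zoe", type: "User" },
  { login: "dependabot[bot]", type: "Bot" },
  { login: "alice", type: "User" },
];

const logins = contributors
  .filter((c) => c.type !== "Bot")
  .map((c) => c.login)
  .sort((a, b) => a.toLowerCase().localeCompare(b.toLowerCase())); // like `sort -f`

console.log(logins.join(",")); // "alice,Zoe"
```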
@@ -1,50 +0,0 @@
name: Test Benchmarks

on:
  schedule:
    - cron: "0 3 * * 1" # Weekly Monday 3am UTC
  workflow_dispatch:

concurrency:
  group: bench-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read
  pull-requests: write

env:
  CARGO_TERM_COLOR: always

jobs:
  benchmarks:
    name: Criterion Benchmarks
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
      - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3

      - name: Run benchmarks
        run: cargo bench --locked 2>&1 | tee benchmark_output.txt

      - name: Upload benchmark results
        if: always()
        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
        with:
          name: benchmark-results
          path: |
            target/criterion/
            benchmark_output.txt
          retention-days: 7

      - name: Post benchmark summary on PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const script = require('./.github/workflows/scripts/test_benchmarks_pr_comment.js');
            await script({ github, context, core });
@@ -1,30 +0,0 @@
name: Test E2E

on:
  push:
    branches: [dev, main]
  workflow_dispatch:

concurrency:
  group: e2e-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read

env:
  CARGO_TERM_COLOR: always

jobs:
  integration-tests:
    name: Integration / E2E Tests
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: 1.92.0
      - uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3
      - name: Run integration / E2E tests
        run: cargo test --test agent_e2e --locked --verbose
@@ -1,72 +0,0 @@
name: Test Fuzz

on:
  schedule:
    - cron: "0 2 * * 0" # Weekly Sunday 2am UTC
  workflow_dispatch:
    inputs:
      fuzz_seconds:
        description: "Seconds to run each fuzz target"
        required: false
        default: "300"

concurrency:
  group: fuzz-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: read
  issues: write

env:
  CARGO_TERM_COLOR: always

jobs:
  fuzz:
    name: Fuzz (${{ matrix.target }})
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 60
    strategy:
      fail-fast: false
      matrix:
        target:
          - fuzz_config_parse
          - fuzz_tool_params
    steps:
      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: nightly
          components: llvm-tools-preview

      - name: Install cargo-fuzz
        run: cargo install cargo-fuzz --locked

      - name: Run fuzz target
        run: |
          SECONDS="${{ github.event.inputs.fuzz_seconds || '300' }}"
          echo "Fuzzing ${{ matrix.target }} for ${SECONDS}s"
          cargo +nightly fuzz run ${{ matrix.target }} -- \
            -max_total_time="${SECONDS}" \
            -max_len=4096
        continue-on-error: true
        id: fuzz

      - name: Upload crash artifacts
        if: failure() || steps.fuzz.outcome == 'failure'
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: fuzz-crashes-${{ matrix.target }}
          path: fuzz/artifacts/${{ matrix.target }}/
          retention-days: 30
          if-no-files-found: ignore

      - name: Report fuzz results
        run: |
          echo "### Fuzz: ${{ matrix.target }}" >> "$GITHUB_STEP_SUMMARY"
          if [ "${{ steps.fuzz.outcome }}" = "failure" ]; then
            echo "- :x: Crashes found — see artifacts" >> "$GITHUB_STEP_SUMMARY"
          else
            echo "- :white_check_mark: No crashes found" >> "$GITHUB_STEP_SUMMARY"
          fi
@@ -1,62 +0,0 @@
name: Test Rust Build

on:
  workflow_call:
    inputs:
      run_command:
        description: "Shell command(s) to execute."
        required: true
        type: string
      timeout_minutes:
        description: "Job timeout in minutes."
        required: false
        default: 20
        type: number
      toolchain:
        description: "Rust toolchain channel/version."
        required: false
        default: "stable"
        type: string
      components:
        description: "Optional rustup components."
        required: false
        default: ""
        type: string
      targets:
        description: "Optional rustup targets."
        required: false
        default: ""
        type: string
      use_cache:
        description: "Whether to enable rust-cache."
        required: false
        default: true
        type: boolean

permissions:
  contents: read

jobs:
  run:
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: ${{ inputs.timeout_minutes }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Setup Rust toolchain
        uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
        with:
          toolchain: ${{ inputs.toolchain }}
          components: ${{ inputs.components }}
          targets: ${{ inputs.targets }}

      - name: Restore Rust cache
        if: inputs.use_cache
        uses: useblacksmith/rust-cache@f53e7f127245d2a269b3d90879ccf259876842d5 # v3

      - name: Run command
        shell: bash
        run: |
          set -euo pipefail
          ${{ inputs.run_command }}
@@ -1,64 +0,0 @@
name: Workflow Sanity

on:
  pull_request:
    paths:
      - ".github/workflows/**"
      - ".github/*.yml"
      - ".github/*.yaml"
  push:
    paths:
      - ".github/workflows/**"
      - ".github/*.yml"
      - ".github/*.yaml"

concurrency:
  group: workflow-sanity-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  no-tabs:
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 10
    steps:
      - name: Checkout
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Fail on tabs in workflow files
        shell: bash
        run: |
          set -euo pipefail
          python - <<'PY'
          from __future__ import annotations

          import pathlib
          import sys

          root = pathlib.Path(".github/workflows")
          bad: list[str] = []
          for path in sorted(root.rglob("*.yml")):
              if b"\t" in path.read_bytes():
                  bad.append(str(path))
          for path in sorted(root.rglob("*.yaml")):
              if b"\t" in path.read_bytes():
                  bad.append(str(path))

          if bad:
              print("Tabs found in workflow file(s):")
              for path in bad:
                  print(f"- {path}")
              sys.exit(1)
          PY

  actionlint:
    runs-on: blacksmith-2vcpu-ubuntu-2404
    timeout-minutes: 10
    steps:
      - name: Checkout
        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4

      - name: Lint GitHub workflows
        uses: rhysd/actionlint@393031adb9afb225ee52ae2ccd7a5af5525e03e8 # v1.7.11
@@ -1,5 +1,6 @@
/target
firmware/*/target
web/dist/
*.db
*.db-journal
.DS_Store
@@ -29,3 +30,14 @@ venv/
*.pem
credentials.json
.worktrees/
.zeroclaw/*

# Skill eval workspaces (test outputs, transcripts, grading)
.claude/skills/*-workspace/

# Local state backups
.local-state-backups/
*.local-state-backup/

# Coverage artifacts
lcov.info
@@ -0,0 +1,14 @@
{
  "recommendations": [
    "rust-lang.rust-analyzer",
    "vadimcn.vscode-lldb",
    "serayuzgur.crates",
    "bungcip.better-toml",
    "usernamehw.errorlens",
    "eamodio.gitlens",
    "tamasfe.even-better-toml",
    "dbaeumer.vscode-eslint",
    "oderwat.indent-rainbow",
    "ryanluker.vscode-coverage-gutters"
  ]
}
@@ -0,0 +1,73 @@
{
  "version": "0.2.0",
  "inputs": [
    {
      "id": "testName",
      "description": "Exact test name to debug (e.g. tests::my_test)",
      "type": "promptString",
      "default": ""
    }
  ],
  "configurations": [
    // ── Runtime ───────────────────────────────────────────
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug: Agent",
      "program": "${workspaceFolder}/target/debug/zeroclaw",
      "args": ["agent"],
      "cwd": "${workspaceFolder}",
      "preLaunchTask": "Build: Debug"
    },
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug: Gateway",
      "program": "${workspaceFolder}/target/debug/zeroclaw",
      "args": ["gateway"],
      "cwd": "${workspaceFolder}",
      "preLaunchTask": "Build: Debug"
    },
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug: Daemon",
      "program": "${workspaceFolder}/target/debug/zeroclaw",
      "args": ["daemon"],
      "cwd": "${workspaceFolder}",
      "preLaunchTask": "Build: Debug"
    },
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug: Status",
      "program": "${workspaceFolder}/target/debug/zeroclaw",
      "args": ["status"],
      "cwd": "${workspaceFolder}",
      "preLaunchTask": "Build: Debug"
    },
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug: Onboard",
      "program": "${workspaceFolder}/target/debug/zeroclaw",
      "args": ["onboard"],
      "cwd": "${workspaceFolder}",
      "preLaunchTask": "Build: Debug"
    },
    // ── Test ──────────────────────────────────────────────
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug: Test (by name)",
      "cargo": {
        "args": ["test", "--no-run", "--lib", "--"],
        "filter": {
          "kind": "lib"
        }
      },
      "args": ["--exact", "${input:testName}", "--nocapture"],
      "cwd": "${workspaceFolder}"
    }
  ]
}
@@ -0,0 +1,22 @@
{
  "git.autofetch": true,
  "git.autofetchPeriod": 90,
  "search.exclude": {
    "**/target": true
  },
  "files.watcherExclude": {
    "**/target/**": true
  },
  "[rust]": {
    "editor.defaultFormatter": "rust-lang.rust-analyzer"
  },
  "editor.formatOnSave": true,
  "editor.formatOnPaste": true,
  "files.autoSave": "afterDelay",
  "files.autoSaveDelay": 1000,
  "rust-analyzer.check.command": "clippy",
  "rust-analyzer.check.extraArgs": ["--all-targets", "--", "-D", "warnings"],
  "window.title": "${activeRepositoryBranchName}",
  "coverage-gutters.coverageFileNames": ["lcov.info"],
  "git.postCommitCommand": "push"
}
@@ -0,0 +1,133 @@
{
  "version": "2.0.0",
  "inputs": [
    {
      "id": "testFilter",
      "description": "Test name or filter pattern",
      "type": "promptString",
      "default": ""
    }
  ],
  "tasks": [
    // ── Build ──────────────────────────────────────────────
    {
      "label": "Build: Debug",
      "type": "shell",
      "command": "cargo",
      "args": ["build"],
      "group": {
        "kind": "build",
        "isDefault": true
      },
      "problemMatcher": ["$rustc"]
    },
    {
      "label": "Build: Release",
      "type": "shell",
      "command": "cargo",
      "args": ["build", "--release"],
      "problemMatcher": ["$rustc"]
    },
    {
      "label": "Build: Check (fast)",
      "type": "shell",
      "command": "cargo",
      "args": ["check", "--all-targets"],
      "problemMatcher": ["$rustc"]
    },
    // ── Lint ───────────────────────────────────────────────
    {
      "label": "Lint: Clippy",
      "type": "shell",
      "command": "cargo",
      "args": ["clippy", "--all-targets", "--", "-D", "warnings"],
      "problemMatcher": ["$rustc"]
    },
    {
      "label": "Lint: Format Check",
      "type": "shell",
      "command": "cargo",
      "args": ["fmt", "--all", "--", "--check"],
      "problemMatcher": []
    },
    {
      "label": "Lint: Format Fix",
      "type": "shell",
      "command": "cargo",
      "args": ["fmt", "--all"],
      "problemMatcher": []
    },
    // ── Test ──────────────────────────────────────────────
    {
      "label": "Test: All",
      "type": "shell",
      "command": "cargo nextest --version >/dev/null 2>&1 || cargo install cargo-nextest && cargo nextest run",
      "group": {
        "kind": "test",
        "isDefault": true
      },
      "problemMatcher": ["$rustc"]
    },
    {
      "label": "Test: Filtered",
      "type": "shell",
      "command": "cargo nextest --version >/dev/null 2>&1 || cargo install cargo-nextest && cargo nextest run -E 'test(${input:testFilter})'",
      "problemMatcher": ["$rustc"]
    },
    {
      "label": "Test: Coverage Report",
      "type": "shell",
      "command": "cargo llvm-cov --version >/dev/null 2>&1 || cargo install cargo-llvm-cov && cargo llvm-cov --lcov --output-path lcov.info",
      "problemMatcher": []
    },
    {
      "label": "Test: Benchmarks",
      "type": "shell",
      "command": "cargo",
      "args": ["bench"],
      "problemMatcher": []
    },
    // ── Security ──────────────────────────────────────────
    {
      "label": "Security: Audit",
      "type": "shell",
      "command": "cargo audit --version >/dev/null 2>&1 || cargo install cargo-audit && cargo audit",
      "problemMatcher": []
    },
    {
      "label": "Security: Deny (licenses + sources)",
      "type": "shell",
      "command": "cargo deny --version >/dev/null 2>&1 || cargo install cargo-deny && cargo deny check licenses sources",
      "problemMatcher": []
    },
    // ── CI (Docker) ───────────────────────────────────────
    {
      "label": "CI: All (Docker)",
      "type": "shell",
      "command": "./dev/ci.sh",
      "args": ["all"],
      "problemMatcher": []
    },
    {
      "label": "CI: Lint (Docker)",
      "type": "shell",
      "command": "./dev/ci.sh",
      "args": ["lint"],
      "problemMatcher": []
    },
    {
      "label": "CI: Test (Docker)",
      "type": "shell",
      "command": "./dev/ci.sh",
      "args": ["test"],
      "problemMatcher": []
    },
    {
      "label": "CI: Security (Docker)",
      "type": "shell",
      "command": "./dev/ci.sh",
      "args": ["security"],
      "problemMatcher": []
    }
  ]
}
@@ -1,484 +0,0 @@
# AGENTS.md — ZeroClaw Agent Engineering Protocol

This file defines the default working protocol for coding agents in this repository.
Scope: entire repository.

## 1) Project Snapshot (Read First)

ZeroClaw is a Rust-first autonomous agent runtime optimized for:

- high performance
- high efficiency
- high stability
- high extensibility
- high sustainability
- high security

Core architecture is trait-driven and modular. Most extension work should be done by implementing traits and registering in factory modules.

Key extension points:

- `src/providers/traits.rs` (`Provider`)
- `src/channels/traits.rs` (`Channel`)
- `src/tools/traits.rs` (`Tool`)
- `src/memory/traits.rs` (`Memory`)
- `src/observability/traits.rs` (`Observer`)
- `src/runtime/traits.rs` (`RuntimeAdapter`)
- `src/peripherals/traits.rs` (`Peripheral`) — hardware boards (STM32, RPi GPIO)

## 2) Deep Architecture Observations (Why This Protocol Exists)

These codebase realities should drive every design decision:

1. **Trait + factory architecture is the stability backbone**
   - Extension points are intentionally explicit and swappable.
   - Most features should be added via trait implementation + factory registration, not cross-cutting rewrites.
2. **Security-critical surfaces are first-class and internet-adjacent**
   - `src/gateway/`, `src/security/`, `src/tools/`, `src/runtime/` carry high blast radius.
   - Defaults already lean secure-by-default (pairing, bind safety, limits, secret handling); keep it that way.
3. **Performance and binary size are product goals, not nice-to-haves**
   - `Cargo.toml` release profile and dependency choices optimize for size and determinism.
   - Convenience dependencies and broad abstractions can silently regress these goals.
4. **Config and runtime contracts are user-facing API**
   - `src/config/schema.rs` and CLI commands are effectively public interfaces.
   - Backward compatibility and explicit migration matter.
5. **The project now runs in high-concurrency collaboration mode**
   - CI + docs governance + label routing are part of the product delivery system.
   - PR throughput is a design constraint, not just a maintainer inconvenience.

## 3) Engineering Principles (Normative)

These principles are mandatory by default. They are not slogans; they are implementation constraints.

### 3.1 KISS (Keep It Simple, Stupid)

**Why here:** Runtime + security behavior must stay auditable under pressure.

Required:

- Prefer straightforward control flow over clever meta-programming.
- Prefer explicit match branches and typed structs over hidden dynamic behavior.
- Keep error paths obvious and localized.

### 3.2 YAGNI (You Aren't Gonna Need It)

**Why here:** Premature features increase attack surface and maintenance burden.

Required:

- Do not add new config keys, trait methods, feature flags, or workflow branches without a concrete accepted use case.
- Do not introduce speculative “future-proof” abstractions without at least one current caller.
- Keep unsupported paths explicit (error out) rather than adding partial fake support.

### 3.3 DRY + Rule of Three

**Why here:** Naive DRY can create brittle shared abstractions across providers/channels/tools.

Required:

- Duplicate small, local logic when it preserves clarity.
- Extract shared utilities only after repeated, stable patterns (rule-of-three).
- When extracting, preserve module boundaries and avoid hidden coupling.

### 3.4 SRP + ISP (Single Responsibility + Interface Segregation)

**Why here:** Trait-driven architecture already encodes subsystem boundaries.

Required:

- Keep each module focused on one concern.
- Extend behavior by implementing existing narrow traits whenever possible.
- Avoid fat interfaces and “god modules” that mix policy + transport + storage.

### 3.5 Fail Fast + Explicit Errors

**Why here:** Silent fallback in agent runtimes can create unsafe or costly behavior.

Required:

- Prefer explicit `bail!`/errors for unsupported or unsafe states.
- Never silently broaden permissions/capabilities.
- Document fallback behavior when fallback is intentional and safe.
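The fail-fast rule above can be sketched in a few lines. This is an illustrative stand-alone example, not code from the repository: it uses a plain `Result<_, String>` in place of `anyhow::bail!`, and the adapter names are hypothetical.

```rust
// Fail-fast sketch: unsupported states error out loudly instead of
// silently falling back to a broader or fake capability.
fn select_runtime(kind: &str) -> Result<&'static str, String> {
    match kind {
        // Only the explicitly supported adapter is wired in.
        "native" => Ok("native"),
        // Unsupported adapters are rejected; no partial fake support.
        other => Err(format!("unsupported runtime adapter: {other}")),
    }
}

fn main() {
    assert_eq!(select_runtime("native"), Ok("native"));
    assert!(select_runtime("wasm").is_err());
}
```

The key property is that the error path is explicit and local: a caller can see every supported state in one `match`, and anything else fails with a descriptive message.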

### 3.6 Secure by Default + Least Privilege

**Why here:** Gateway/tools/runtime can execute actions with real-world side effects.

Required:

- Deny-by-default for access and exposure boundaries.
- Never log secrets, raw tokens, or sensitive payloads.
- Keep network/filesystem/shell scope as narrow as possible unless explicitly justified.

### 3.7 Determinism + Reproducibility

**Why here:** Reliable CI and low-latency triage depend on deterministic behavior.

Required:

- Prefer reproducible commands and locked dependency behavior in CI-sensitive paths.
- Keep tests deterministic (no flaky timing/network dependence without guardrails).
- Ensure local validation commands map to CI expectations.

### 3.8 Reversibility + Rollback-First Thinking

**Why here:** Fast recovery is mandatory under high PR volume.

Required:

- Keep changes easy to revert (small scope, clear blast radius).
- For risky changes, define rollback path before merge.
- Avoid mixed mega-patches that block safe rollback.

## 4) Repository Map (High-Level)

- `src/main.rs` — CLI entrypoint and command routing
- `src/lib.rs` — module exports and shared command enums
- `src/config/` — schema + config loading/merging
- `src/agent/` — orchestration loop
- `src/gateway/` — webhook/gateway server
- `src/security/` — policy, pairing, secret store
- `src/memory/` — markdown/sqlite memory backends + embeddings/vector merge
- `src/providers/` — model providers and resilient wrapper
- `src/channels/` — Telegram/Discord/Slack/etc channels
- `src/tools/` — tool execution surface (shell, file, memory, browser)
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO); see `docs/hardware-peripherals-design.md`
- `src/runtime/` — runtime adapters (currently native)
- `docs/` — task-oriented documentation system (hubs, unified TOC, references, operations, security proposals, multilingual guides)
- `.github/` — CI, templates, automation workflows

## 4.1 Documentation System Contract (Required)

Treat documentation as a first-class product surface, not a post-merge artifact.

Canonical entry points:

- root READMEs: `README.md`, `README.zh-CN.md`, `README.ja.md`, `README.ru.md`, `README.fr.md`, `README.vi.md`
- docs hubs: `docs/README.md`, `docs/README.zh-CN.md`, `docs/README.ja.md`, `docs/README.ru.md`, `docs/README.fr.md`, `docs/i18n/vi/README.md`
- unified TOC: `docs/SUMMARY.md`

Supported locales (current contract):

- `en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`

Collection indexes (category navigation):

- `docs/getting-started/README.md`
- `docs/reference/README.md`
- `docs/operations/README.md`
- `docs/security/README.md`
- `docs/hardware/README.md`
- `docs/contributing/README.md`
- `docs/project/README.md`

Runtime-contract references (must track behavior changes):

- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`

Required docs governance rules:

- Keep README/hub top navigation and quick routes intuitive and non-duplicative.
- Keep entry-point parity across all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when changing navigation architecture.
- If a change touches docs IA, runtime-contract references, or user-facing wording in shared docs, perform i18n follow-through for currently supported locales in the same PR:
  - Update locale navigation links (`README*`, `docs/README*`, `docs/SUMMARY.md`).
  - Update localized runtime-contract docs where equivalents exist (at minimum `commands-reference`, `config-reference`, `troubleshooting` for `fr` and `vi`).
  - For Vietnamese, treat `docs/i18n/vi/**` as canonical. Keep `docs/*.<locale>.md` compatibility shims aligned if present.
- Keep proposal/roadmap docs explicitly labeled; avoid mixing proposal text into runtime-contract docs.
- Keep project snapshots date-stamped and immutable once superseded by a newer date.

## 5) Risk Tiers by Path (Review Depth Contract)

Use these tiers when deciding validation depth and review rigor.

- **Low risk**: docs/chore/tests-only changes
- **Medium risk**: most `src/**` behavior changes without boundary/security impact
- **High risk**: `src/security/**`, `src/runtime/**`, `src/gateway/**`, `src/tools/**`, `.github/workflows/**`, access-control boundaries

When uncertain, classify as higher risk.
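The tier rules above are mechanical enough to express as a small classifier. The function below is an illustrative sketch only (not a repository utility); it maps a changed path to the tier named in section 5, defaulting unknown paths downward only when they are clearly outside `src/`.

```rust
// Illustrative path -> risk-tier mapping from section 5.
fn risk_tier(path: &str) -> &'static str {
    // Paths called out explicitly as high blast radius.
    const HIGH: [&str; 5] = [
        "src/security/",
        "src/runtime/",
        "src/gateway/",
        "src/tools/",
        ".github/workflows/",
    ];
    if HIGH.iter().any(|prefix| path.starts_with(prefix)) {
        "high"
    } else if path.starts_with("src/") {
        // Most behavior changes without boundary/security impact.
        "medium"
    } else {
        // Docs/chore/tests-only changes.
        "low"
    }
}

fn main() {
    assert_eq!(risk_tier("src/security/policy.rs"), "high");
    assert_eq!(risk_tier("src/providers/openai.rs"), "medium");
    assert_eq!(risk_tier("docs/README.md"), "low");
}
```

In a real review the mapping is advisory: per the rule above, an ambiguous path should be bumped to the higher tier by the reviewer, not by the heuristic.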

## 6) Agent Workflow (Required)

1. **Read before write**
   - Inspect existing module, factory wiring, and adjacent tests before editing.
2. **Define scope boundary**
   - One concern per PR; avoid mixed feature+refactor+infra patches.
3. **Implement minimal patch**
   - Apply KISS/YAGNI/DRY rule-of-three explicitly.
4. **Validate by risk tier**
   - Docs-only: lightweight checks.
   - Code/risky changes: full relevant checks and focused scenarios.
5. **Document impact**
   - Update docs/PR notes for behavior, risk, side effects, and rollback.
   - If CLI/config/provider/channel behavior changed, update corresponding runtime-contract references.
   - If docs entry points changed, keep all supported locale README/docs-hub navigation aligned (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`).
6. **Respect queue hygiene**
   - If stacked PR: declare `Depends on #...`.
   - If replacing old PR: declare `Supersedes #...`.

### 6.1 Branch / Commit / PR Flow (Required)

All contributors (human or agent) must follow the same collaboration flow:

- Create and work from a non-`main` branch.
- Commit changes to that branch with clear, scoped commit messages.
- Open a PR to `dev`; do not push directly to `dev` or `main`.
- `main` is reserved for release promotion PRs from `dev`.
- Wait for required checks and review outcomes before merging.
- Merge via PR controls (squash/rebase/merge as repository policy allows).
- Branch deletion after merge is optional; long-lived branches are allowed when intentionally maintained.

### 6.2 Worktree Workflow (Required for Multi-Track Agent Work)

Use Git worktrees to isolate concurrent agent/human tracks safely and predictably:

- Use one worktree per active branch/PR stream to avoid cross-task contamination.
- Keep each worktree on a single branch; do not mix unrelated edits in one worktree.
- Run validation commands inside the corresponding worktree before commit/PR.
- Name worktrees clearly by scope (for example: `wt/ci-hardening`, `wt/provider-fix`) and remove stale worktrees when no longer needed.
- PR checkpoint rules from section 6.1 still apply to worktree-based development.
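The worktree lifecycle above can be exercised end to end against a throwaway repository. This is a minimal sketch: the branch and worktree names are illustrative, and the identity values are neutral placeholders per section 9.1.

```shell
#!/usr/bin/env bash
# One worktree per active branch, named by scope; removed when stale.
set -euo pipefail

# Throwaway repository so the sketch is self-contained.
repo="$(mktemp -d)"
git -C "$repo" init -q -b main
git -C "$repo" -c user.name=zeroclaw_user \
    -c user.email=zeroclaw_user@example.com \
    commit --allow-empty -q -m "init"

# Create an isolated worktree for one PR stream.
git -C "$repo" worktree add -b ci-hardening "$repo/wt/ci-hardening"

# Validation commands would run inside "$repo/wt/ci-hardening" here.
git -C "$repo" worktree list

# Remove the worktree once the branch is merged or abandoned.
git -C "$repo" worktree remove "$repo/wt/ci-hardening"
git -C "$repo" worktree prune
```

Because each worktree checks out exactly one branch, concurrent tracks cannot clobber each other's uncommitted state, which is the contamination risk this section guards against.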

### 6.3 Code Naming Contract (Required)

Apply these naming rules for all code changes unless a subsystem has a stronger existing pattern.

- Use Rust standard casing consistently: modules/files `snake_case`, types/traits/enums `PascalCase`, functions/variables `snake_case`, constants/statics `SCREAMING_SNAKE_CASE`.
- Name types and modules by domain role, not implementation detail (for example `DiscordChannel`, `SecurityPolicy`, `MemoryStore` over vague names like `Manager`/`Helper`).
- Keep trait implementer naming explicit and predictable: `<ProviderName>Provider`, `<ChannelName>Channel`, `<ToolName>Tool`, `<BackendName>Memory`.
- Keep factory registration keys stable, lowercase, and user-facing (for example `"openai"`, `"discord"`, `"shell"`), and avoid alias sprawl without migration need.
- Name tests by behavior/outcome (`<subject>_<expected_behavior>`) and keep fixture identifiers neutral/project-scoped.
- If identity-like naming is required in tests/examples, use ZeroClaw-native labels only (`ZeroClawAgent`, `zeroclaw_user`, `zeroclaw_node`).

### 6.4 Architecture Boundary Contract (Required)

Use these rules to keep the trait/factory architecture stable under growth.

- Extend capabilities by adding trait implementations + factory wiring first; avoid cross-module rewrites for isolated features.
- Keep dependency direction inward to contracts: concrete integrations depend on trait/config/util layers, not on other concrete integrations.
- Avoid creating cross-subsystem coupling (for example provider code importing channel internals, tool code mutating gateway policy directly).
- Keep module responsibilities single-purpose: orchestration in `agent/`, transport in `channels/`, model I/O in `providers/`, policy in `security/`, execution in `tools/`.
- Introduce new shared abstractions only after repeated use (rule-of-three), with at least one real caller in current scope.
- For config/schema changes, treat keys as public contract: document defaults, compatibility impact, and migration/rollback path.

## 7) Change Playbooks

### 7.1 Adding a Provider

- Implement `Provider` in `src/providers/`.
- Register in `src/providers/mod.rs` factory.
- Add focused tests for factory wiring and error paths.
- Avoid leaking provider-specific behavior into shared orchestration code.
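The playbook above follows the trait + factory pattern from section 2. The sketch below is a self-contained stand-in, not the real interface from `src/providers/traits.rs`: the trait shape, `MockProvider`, and `create_provider` are all illustrative.

```rust
// Minimal trait + factory sketch of the provider extension pattern.
trait Provider {
    fn name(&self) -> &'static str;
}

// One concrete implementation, registered below.
struct MockProvider;

impl Provider for MockProvider {
    fn name(&self) -> &'static str {
        "mock"
    }
}

// Factory keys stay stable, lowercase, and user-facing (section 6.3);
// unknown keys fail fast rather than falling back silently.
fn create_provider(key: &str) -> Result<Box<dyn Provider>, String> {
    match key {
        "mock" => Ok(Box::new(MockProvider)),
        other => Err(format!("unknown provider: {other}")),
    }
}

fn main() {
    assert_eq!(create_provider("mock").unwrap().name(), "mock");
    assert!(create_provider("nope").is_err());
}
```

Adding a new provider then means one new `impl Provider` plus one new `match` arm, with no changes to shared orchestration code, which is exactly the boundary the playbook protects.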

### 7.2 Adding a Channel

- Implement `Channel` in `src/channels/`.
- Keep `send`, `listen`, `health_check`, typing semantics consistent.
- Cover auth/allowlist/health behavior with tests.

### 7.3 Adding a Tool

- Implement `Tool` in `src/tools/` with strict parameter schema.
- Validate and sanitize all inputs.
- Return structured `ToolResult`; avoid panics in runtime path.
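The tool playbook's validate-then-return-structured-result shape can be sketched as follows. This is an illustrative example: `ToolOutcome`, `run_echo_tool`, and the specific limits are hypothetical stand-ins for the real `Tool`/`ToolResult` types in `src/tools/`.

```rust
// Structured outcome instead of panicking in the runtime path.
#[derive(Debug, PartialEq)]
enum ToolOutcome {
    Success(String),
    InvalidInput(String),
}

fn run_echo_tool(input: &str) -> ToolOutcome {
    // Validate and sanitize all inputs before doing any work.
    if input.is_empty() || input.len() > 1024 {
        return ToolOutcome::InvalidInput("input must be 1..=1024 bytes".into());
    }
    if input.chars().any(|c| c.is_control()) {
        return ToolOutcome::InvalidInput("control characters rejected".into());
    }
    ToolOutcome::Success(input.to_string())
}

fn main() {
    assert_eq!(run_echo_tool("hello"), ToolOutcome::Success("hello".into()));
    assert!(matches!(run_echo_tool(""), ToolOutcome::InvalidInput(_)));
}
```

Every failure mode maps to a typed variant the orchestrator can inspect, so bad input degrades into a reportable result rather than an abort of the agent loop.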
|
||||
|
||||
### 7.4 Adding a Peripheral
|
||||
|
||||
- Implement `Peripheral` in `src/peripherals/`.
|
||||
- Peripherals expose `tools()` — each tool delegates to the hardware (GPIO, sensors, etc.).
|
||||
- Register board type in config schema if needed.
|
||||
- See `docs/hardware-peripherals-design.md` for protocol and firmware notes.
|
||||
|
||||
### 7.5 Security / Runtime / Gateway Changes
|
||||
|
||||
- Include threat/risk notes and rollback strategy.
|
||||
- Add/update tests or validation evidence for failure modes and boundaries.
|
||||
- Keep observability useful but non-sensitive.
|
||||
- For `.github/workflows/**` changes, include Actions allowlist impact in PR notes and update `docs/actions-source-policy.md` when sources change.
|
||||
|
||||
### 7.6 Docs System / README / IA Changes
|
||||
|
||||
- Treat docs navigation as product UX: preserve clear pathing from README -> docs hub -> SUMMARY -> category index.
|
||||
- Keep top-level nav concise; avoid duplicative links across adjacent nav blocks.
|
||||
- When runtime surfaces change, update related references (`commands/providers/channels/config/runbook/troubleshooting`).
|
||||
- Keep multilingual entry-point parity for all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when nav or key wording changes.
|
||||
- When shared docs wording changes, sync corresponding localized docs for supported locales in the same PR (or explicitly document deferral and follow-up PR).
|
||||
- For docs snapshots, add new date-stamped files for new sprints rather than rewriting historical context.
|
||||
|
||||
|
||||
## 8) Validation Matrix
|
||||
|
||||
Default local checks for code changes:
|
||||
|
||||
```bash
|
||||
cargo fmt --all -- --check
|
||||
cargo clippy --all-targets -- -D warnings
|
||||
cargo test
|
||||
```
|
||||
|
||||
Preferred local pre-PR validation path (recommended, not required):
|
||||
|
||||
```bash
|
||||
./dev/ci.sh all
|
||||
```
|
||||
|
||||
Notes:
|
||||
|
||||
- Local Docker-based CI is strongly recommended when Docker is available.
|
||||
- Contributors are not blocked from opening a PR if local Docker CI is unavailable; in that case run the most relevant native checks and document what was run.
|
||||
|
||||
Additional expectations by change type:
|
||||
|
||||
- **Docs/template-only**:
|
||||
- run markdown lint and link-integrity checks
|
||||
- if touching README/docs-hub/SUMMARY/collection indexes, verify EN/ZH/JA/RU navigation parity
|
||||
- if touching bootstrap docs/scripts, run `bash -n bootstrap.sh scripts/bootstrap.sh scripts/install.sh`
|
||||
- **Workflow changes**: validate YAML syntax; run workflow lint/sanity checks when available.
|
||||
- **Security/runtime/gateway/tools**: include at least one boundary/failure-mode validation.
|
||||
|
||||
If full checks are impractical, run the most relevant subset and document what was skipped and why.
|
||||
|
||||
## 9) Collaboration and PR Discipline
|
||||
|
||||
- Follow `.github/pull_request_template.md` fully (including side effects / blast radius).
|
||||
- Keep PR descriptions concrete: problem, change, non-goals, risk, rollback.
|
||||
- Use conventional commit titles.
|
||||
- Prefer small PRs (`size: XS/S/M`) when possible.
|
||||
- Agent-assisted PRs are welcome, **but contributors remain accountable for understanding what their code will do**.
|
||||
|
||||
### 9.1 Privacy/Sensitive Data and Neutral Wording (Required)
|
||||
|
||||
Treat privacy and neutrality as merge gates, not best-effort guidelines.
|
||||
|
||||
- Never commit personal or sensitive data in code, docs, tests, fixtures, snapshots, logs, examples, or commit messages.
|
||||
- Prohibited data includes (non-exhaustive): real names, personal emails, phone numbers, addresses, access tokens, API keys, credentials, IDs, and private URLs.
|
||||
- Use neutral project-scoped placeholders (for example: `user_a`, `test_user`, `project_bot`, `example.com`) instead of real identity data.
|
||||
- Test names/messages/fixtures must be impersonal and system-focused; avoid first-person or identity-specific language.
|
||||
- If identity-like context is unavoidable, use ZeroClaw-scoped roles/labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`) and avoid real-world personas.
|
||||
- Recommended identity-safe naming palette (use when identity-like context is required):
|
||||
- actor labels: `ZeroClawAgent`, `ZeroClawOperator`, `ZeroClawMaintainer`, `zeroclaw_user`
|
||||
- service/runtime labels: `zeroclaw_bot`, `zeroclaw_service`, `zeroclaw_runtime`, `zeroclaw_node`
|
||||
- environment labels: `zeroclaw_project`, `zeroclaw_workspace`, `zeroclaw_channel`
|
||||
- If reproducing external incidents, redact and anonymize all payloads before committing.
|
||||
- Before push, review `git diff --cached` specifically for accidental sensitive strings and identity leakage.
|
||||
|
||||
### 9.2 Superseded-PR Attribution (Required)
|
||||
|
||||
When a PR supersedes another contributor's PR and carries forward substantive code or design decisions, preserve authorship explicitly.
|
||||
|
||||
- In the integrating commit message, add one `Co-authored-by: Name <email>` trailer per superseded contributor whose work is materially incorporated.
|
||||
- Use a GitHub-recognized email (`<login@users.noreply.github.com>` or the contributor's verified commit email) so attribution is rendered correctly.
|
||||
- Keep trailers on their own lines after a blank line at commit-message end; never encode them as escaped `\\n` text.
|
||||
- In the PR body, list superseded PR links and briefly state what was incorporated from each.
|
||||
- If no actual code/design was incorporated (only inspiration), do not use `Co-authored-by`; give credit in PR notes instead.
|
||||
|
||||
### 9.3 Superseded-PR PR Template (Recommended)
|
||||
|
||||
When superseding multiple PRs, use a consistent title/body structure to reduce reviewer ambiguity.
|
||||
|
||||
- Recommended title format: `feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]`
|
||||
- If this is docs/chore/meta only, keep the same supersede suffix and use the appropriate conventional-commit type.
|
||||
- In the PR body, include the following template (fill placeholders, remove non-applicable lines):
|
||||
|
||||
```md
|
||||
## Supersedes
|
||||
- #<pr_a> by @<author_a>
|
||||
- #<pr_b> by @<author_b>
|
||||
- #<pr_n> by @<author_n>
|
||||
|
||||
## Integrated Scope
|
||||
- From #<pr_a>: <what was materially incorporated>
|
||||
- From #<pr_b>: <what was materially incorporated>
|
||||
- From #<pr_n>: <what was materially incorporated>

## Attribution
- Co-authored-by trailers added for materially incorporated contributors: Yes/No
- If No, explain why (for example: no direct code/design carry-over)

## Non-goals
- <explicitly list what was not carried over>

## Risk and Rollback
- Risk: <summary>
- Rollback: <revert commit/PR strategy>
```

### 9.4 Superseded-PR Commit Template (Recommended)

When a commit unifies or supersedes prior PR work, use a deterministic commit message layout so attribution is machine-parsed and reviewer-friendly.

- Keep one blank line between message sections, and exactly one blank line before trailer lines.
- Keep each trailer on its own line; do not wrap, indent, or encode as escaped `\n` text.
- Add one `Co-authored-by` trailer per materially incorporated contributor, using GitHub-recognized email.
- If no direct code/design is carried over, omit `Co-authored-by` and explain attribution in the PR body instead.

```text
feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]

<one-paragraph summary of integrated outcome>

Supersedes:
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>

Integrated scope:
- <subsystem_or_feature_a>: from #<pr_x>
- <subsystem_or_feature_b>: from #<pr_y>

Co-authored-by: <Name A> <login_a@users.noreply.github.com>
Co-authored-by: <Name B> <login_b@users.noreply.github.com>
```

Reference docs:

- `CONTRIBUTING.md`
- `docs/README.md`
- `docs/SUMMARY.md`
- `docs/docs-inventory.md`
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`
- `docs/pr-workflow.md`
- `docs/reviewer-playbook.md`
- `docs/ci-map.md`
- `docs/actions-source-policy.md`

## 10) Anti-Patterns (Do Not)

- Do not add heavy dependencies for minor convenience.
- Do not silently weaken security policy or access constraints.
- Do not add speculative config/feature flags “just in case”.
- Do not mix massive formatting-only changes with functional changes.
- Do not modify unrelated modules “while here”.
- Do not bypass failing checks without explicit explanation.
- Do not hide behavior-changing side effects in refactor commits.
- Do not include personal identity or sensitive information in test data, examples, docs, or commits.

## 11) Handoff Template (Agent -> Agent / Maintainer)

When handing off work, include:

1. What changed
2. What did not change
3. Validation run and results
4. Remaining risks / unknowns
5. Next recommended action

## 12) Vibe Coding Guardrails

When working in fast iterative mode:

- Keep each iteration reversible (small commits, clear rollback).
- Validate assumptions with code search before implementing.
- Prefer deterministic behavior over clever shortcuts.
- Do not “ship and hope” on security-sensitive paths.
- If uncertain, leave a concrete TODO with verification context, not a hidden guess.

@@ -1,67 +1 @@
# Changelog

All notable changes to ZeroClaw will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Security
- **Legacy XOR cipher migration**: The `enc:` prefix (XOR cipher) is now deprecated.
  Secrets using this format will be automatically migrated to `enc2:` (ChaCha20-Poly1305 AEAD)
  when decrypted via `decrypt_and_migrate()`. A `tracing::warn!` is emitted when legacy
  values are encountered. The XOR cipher will be removed in a future release.

### Added
- `SecretStore::decrypt_and_migrate()` — Decrypts secrets and returns a migrated `enc2:`
  value if the input used the legacy `enc:` format
- `SecretStore::needs_migration()` — Check if a value uses the legacy `enc:` format
- `SecretStore::is_secure_encrypted()` — Check if a value uses the secure `enc2:` format
- **Telegram mention_only mode** — New config option `mention_only` for Telegram channel.
  When enabled, bot only responds to messages that @-mention the bot in group chats.
  Direct messages always work regardless of this setting. Default: `false`.

### Deprecated
- `enc:` prefix for encrypted secrets — Use `enc2:` (ChaCha20-Poly1305) instead.
  Legacy values are still decrypted for backward compatibility but should be migrated.

### Fixed
- **Gemini thinking model support** — Responses from thinking models (e.g. `gemini-3-pro-preview`)
  are now handled correctly. The provider skips internal reasoning parts (`thought: true`) and
  signature parts (`thoughtSignature`), extracting only the final answer text. Falls back to
  thinking content when no non-thinking response is available.
- Updated default gateway port to `42617`.
- Removed all user-facing references to port `3000`.
- **Onboarding channel menu dispatch** now uses an enum-backed selector instead of hard-coded
  numeric match arms, preventing duplicated pattern arms and related `unreachable pattern`
  compiler warnings in `src/onboard/wizard.rs`.
- **OpenAI native tool spec parsing** now uses owned serializable/deserializable structs,
  fixing a compile-time type mismatch when validating tool schemas before API calls.

## [0.1.0] - 2026-02-13

### Added
- **Core Architecture**: Trait-based pluggable system for Provider, Channel, Observer, RuntimeAdapter, Tool
- **Provider**: OpenRouter implementation (access Claude, GPT-4, Llama, Gemini via single API)
- **Channels**: CLI channel with interactive and single-message modes
- **Observability**: NoopObserver (zero overhead), LogObserver (tracing), MultiObserver (fan-out)
- **Security**: Workspace sandboxing, command allowlisting, path traversal blocking, autonomy levels (ReadOnly/Supervised/Full), rate limiting
- **Tools**: Shell (sandboxed), FileRead (path-checked), FileWrite (path-checked)
- **Memory (Brain)**: SQLite persistent backend (searchable, survives restarts), Markdown backend (plain files, human-readable)
- **Heartbeat Engine**: Periodic task execution from HEARTBEAT.md
- **Runtime**: Native adapter for Mac/Linux/Raspberry Pi
- **Config**: TOML-based configuration with sensible defaults
- **Onboarding**: Interactive CLI wizard with workspace scaffolding
- **CLI Commands**: agent, gateway, status, cron, channel, tools, onboard
- **CI/CD**: GitHub Actions with cross-platform builds (Linux, macOS Intel/ARM, Windows)
- **Tests**: 159 inline tests covering all modules and edge cases
- **Binary**: 3.1MB optimized release build (includes bundled SQLite)

### Security
- Path traversal attack prevention
- Command injection blocking
- Workspace escape prevention
- Forbidden system path protection (`/etc`, `/root`, `~/.ssh`)

[0.1.0]: https://github.com/theonlyhennygod/zeroclaw/releases/tag/v0.1.0

@@ -1,20 +1,26 @@
# CLAUDE.md — ZeroClaw Agent Engineering Protocol
# CLAUDE.md — ZeroClaw

This file defines the default working protocol for Claude agents in this repository.
Scope: entire repository.
## Commands

## 1) Project Snapshot (Read First)
```bash
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test
```

ZeroClaw is a Rust-first autonomous agent runtime optimized for:
Full pre-PR validation (recommended):

- high performance
- high efficiency
- high stability
- high extensibility
- high sustainability
- high security
```bash
./dev/ci.sh all
```

Core architecture is trait-driven and modular. Most extension work should be done by implementing traits and registering in factory modules.
Docs-only changes: run markdown lint and link-integrity checks. If touching bootstrap scripts: `bash -n install.sh`.

## Project Snapshot

ZeroClaw is a Rust-first autonomous agent runtime optimized for performance, efficiency, stability, extensibility, sustainability, and security.

Core architecture is trait-driven and modular. Extend by implementing traits and registering in factory modules.

Key extension points:

@@ -26,111 +32,7 @@ Key extension points:
- `src/runtime/traits.rs` (`RuntimeAdapter`)
- `src/peripherals/traits.rs` (`Peripheral`) — hardware boards (STM32, RPi GPIO)

## 2) Deep Architecture Observations (Why This Protocol Exists)

These codebase realities should drive every design decision:

1. **Trait + factory architecture is the stability backbone**
   - Extension points are intentionally explicit and swappable.
   - Most features should be added via trait implementation + factory registration, not cross-cutting rewrites.
2. **Security-critical surfaces are first-class and internet-adjacent**
   - `src/gateway/`, `src/security/`, `src/tools/`, `src/runtime/` carry high blast radius.
   - Defaults already lean secure-by-default (pairing, bind safety, limits, secret handling); keep it that way.
3. **Performance and binary size are product goals, not nice-to-have**
   - `Cargo.toml` release profile and dependency choices optimize for size and determinism.
   - Convenience dependencies and broad abstractions can silently regress these goals.
4. **Config and runtime contracts are user-facing API**
   - `src/config/schema.rs` and CLI commands are effectively public interfaces.
   - Backward compatibility and explicit migration matter.
5. **The project now runs in high-concurrency collaboration mode**
   - CI + docs governance + label routing are part of the product delivery system.
   - PR throughput is a design constraint, not just a maintainer inconvenience.

## 3) Engineering Principles (Normative)

These principles are mandatory by default. They are not slogans; they are implementation constraints.

### 3.1 KISS (Keep It Simple, Stupid)

**Why here:** Runtime + security behavior must stay auditable under pressure.

Required:

- Prefer straightforward control flow over clever meta-programming.
- Prefer explicit match branches and typed structs over hidden dynamic behavior.
- Keep error paths obvious and localized.

### 3.2 YAGNI (You Aren't Gonna Need It)

**Why here:** Premature features increase attack surface and maintenance burden.

Required:

- Do not add new config keys, trait methods, feature flags, or workflow branches without a concrete accepted use case.
- Do not introduce speculative “future-proof” abstractions without at least one current caller.
- Keep unsupported paths explicit (error out) rather than adding partial fake support.

### 3.3 DRY + Rule of Three

**Why here:** Naive DRY can create brittle shared abstractions across providers/channels/tools.

Required:

- Duplicate small, local logic when it preserves clarity.
- Extract shared utilities only after repeated, stable patterns (rule-of-three).
- When extracting, preserve module boundaries and avoid hidden coupling.

### 3.4 SRP + ISP (Single Responsibility + Interface Segregation)

**Why here:** Trait-driven architecture already encodes subsystem boundaries.

Required:

- Keep each module focused on one concern.
- Extend behavior by implementing existing narrow traits whenever possible.
- Avoid fat interfaces and “god modules” that mix policy + transport + storage.

### 3.5 Fail Fast + Explicit Errors

**Why here:** Silent fallback in agent runtimes can create unsafe or costly behavior.

Required:

- Prefer explicit `bail!`/errors for unsupported or unsafe states.
- Never silently broaden permissions/capabilities.
- Document fallback behavior when fallback is intentional and safe.

### 3.6 Secure by Default + Least Privilege

**Why here:** Gateway/tools/runtime can execute actions with real-world side effects.

Required:

- Deny-by-default for access and exposure boundaries.
- Never log secrets, raw tokens, or sensitive payloads.
- Keep network/filesystem/shell scope as narrow as possible unless explicitly justified.

### 3.7 Determinism + Reproducibility

**Why here:** Reliable CI and low-latency triage depend on deterministic behavior.

Required:

- Prefer reproducible commands and locked dependency behavior in CI-sensitive paths.
- Keep tests deterministic (no flaky timing/network dependence without guardrails).
- Ensure local validation commands map to CI expectations.

### 3.8 Reversibility + Rollback-First Thinking

**Why here:** Fast recovery is mandatory under high PR volume.

Required:

- Keep changes easy to revert (small scope, clear blast radius).
- For risky changes, define rollback path before merge.
- Avoid mixed mega-patches that block safe rollback.

## 4) Repository Map (High-Level)
## Repository Map

- `src/main.rs` — CLI entrypoint and command routing
- `src/lib.rs` — module exports and shared command enums
@@ -142,59 +44,12 @@ Required:
- `src/providers/` — model providers and resilient wrapper
- `src/channels/` — Telegram/Discord/Slack/etc channels
- `src/tools/` — tool execution surface (shell, file, memory, browser)
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO); see `docs/hardware-peripherals-design.md`
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO)
- `src/runtime/` — runtime adapters (currently native)
- `docs/` — task-oriented documentation system (hubs, unified TOC, references, operations, security proposals, multilingual guides)
- `docs/` — topic-based documentation (setup-guides, reference, ops, security, hardware, contributing, maintainers)
- `.github/` — CI, templates, automation workflows

## 4.1 Documentation System Contract (Required)

Treat documentation as a first-class product surface, not a post-merge artifact.

Canonical entry points:

- root READMEs: `README.md`, `README.zh-CN.md`, `README.ja.md`, `README.ru.md`, `README.fr.md`, `README.vi.md`
- docs hubs: `docs/README.md`, `docs/README.zh-CN.md`, `docs/README.ja.md`, `docs/README.ru.md`, `docs/README.fr.md`, `docs/i18n/vi/README.md`
- unified TOC: `docs/SUMMARY.md`

Supported locales (current contract):

- `en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`

Collection indexes (category navigation):

- `docs/getting-started/README.md`
- `docs/reference/README.md`
- `docs/operations/README.md`
- `docs/security/README.md`
- `docs/hardware/README.md`
- `docs/contributing/README.md`
- `docs/project/README.md`

Runtime-contract references (must track behavior changes):

- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`

Required docs governance rules:

- Keep README/hub top navigation and quick routes intuitive and non-duplicative.
- Keep entry-point parity across all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when changing navigation architecture.
- If a change touches docs IA, runtime-contract references, or user-facing wording in shared docs, perform i18n follow-through for currently supported locales in the same PR:
  - Update locale navigation links (`README*`, `docs/README*`, `docs/SUMMARY.md`).
  - Update localized runtime-contract docs where equivalents exist (at minimum `commands-reference`, `config-reference`, `troubleshooting` for `fr` and `vi`).
  - For Vietnamese, treat `docs/i18n/vi/**` as canonical. Keep `docs/*.<locale>.md` compatibility shims aligned if present.
- Keep proposal/roadmap docs explicitly labeled; avoid mixing proposal text into runtime-contract docs.
- Keep project snapshots date-stamped and immutable once superseded by a newer date.

## 5) Risk Tiers by Path (Review Depth Contract)

Use these tiers when deciding validation depth and review rigor.
## Risk Tiers

- **Low risk**: docs/chore/tests-only changes
- **Medium risk**: most `src/**` behavior changes without boundary/security impact
@@ -202,282 +57,34 @@ Use these tiers when deciding validation depth and review rigor.

When uncertain, classify as higher risk.

## 6) Agent Workflow (Required)
## Workflow

1. **Read before write**
   - Inspect existing module, factory wiring, and adjacent tests before editing.
2. **Define scope boundary**
   - One concern per PR; avoid mixed feature+refactor+infra patches.
3. **Implement minimal patch**
   - Apply KISS/YAGNI/DRY rule-of-three explicitly.
4. **Validate by risk tier**
   - Docs-only: lightweight checks.
   - Code/risky changes: full relevant checks and focused scenarios.
5. **Document impact**
   - Update docs/PR notes for behavior, risk, side effects, and rollback.
   - If CLI/config/provider/channel behavior changed, update corresponding runtime-contract references.
   - If docs entry points changed, keep all supported locale README/docs-hub navigation aligned (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`).
6. **Respect queue hygiene**
   - If stacked PR: declare `Depends on #...`.
   - If replacing old PR: declare `Supersedes #...`.
1. **Read before write** — inspect existing module, factory wiring, and adjacent tests before editing.
2. **One concern per PR** — avoid mixed feature+refactor+infra patches.
3. **Implement minimal patch** — no speculative abstractions, no config keys without a concrete use case.
4. **Validate by risk tier** — docs-only: lightweight checks. Code changes: full relevant checks.
5. **Document impact** — update PR notes for behavior, risk, side effects, and rollback.
6. **Queue hygiene** — stacked PR: declare `Depends on #...`. Replacing old PR: declare `Supersedes #...`.

### 6.1 Branch / Commit / PR Flow (Required)
Branch/commit/PR rules:
- Work from a non-`master` branch. Open a PR to `master`; do not push directly.
- Use conventional commit titles. Prefer small PRs (`size: XS/S/M`).
- Follow `.github/pull_request_template.md` fully.
- Never commit secrets, personal data, or real identity information (see `@docs/contributing/pr-discipline.md`).

All contributors (human or agent) must follow the same collaboration flow:

- Create and work from a non-`main` branch.
- Commit changes to that branch with clear, scoped commit messages.
- Open a PR to `main`; do not push directly to `main`.
- Wait for required checks and review outcomes before merging.
- Merge via PR controls (squash/rebase/merge as repository policy allows).
- Branch deletion after merge is optional; long-lived branches are allowed when intentionally maintained.
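
As a concrete sketch of this flow, assuming a standard clone (the branch name, file, and commit title below are illustrative placeholders, not repository files):

```bash
# Scratch repository for demonstration; in real work you operate in your clone.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email zeroclaw_user@example.com
git config user.name zeroclaw_user

# Work from a scoped, non-default branch.
git checkout -q -b feat/provider-retry

# ...edit files, then stage and commit with a conventional commit title.
echo "demo" > demo.txt
git add demo.txt
git commit -q -m "feat(providers): add retry wrapper for transient errors"

git log -1 --format=%s   # shows the conventional commit title
```

From here, push the branch and open the PR through the normal review controls rather than pushing to the default branch.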

### 6.2 Worktree Workflow (Required for Multi-Track Agent Work)

Use Git worktrees to isolate concurrent agent/human tracks safely and predictably:

- Use one worktree per active branch/PR stream to avoid cross-task contamination.
- Keep each worktree on a single branch; do not mix unrelated edits in one worktree.
- Run validation commands inside the corresponding worktree before commit/PR.
- Name worktrees clearly by scope (for example: `wt/ci-hardening`, `wt/provider-fix`) and remove stale worktrees when no longer needed.
- PR checkpoint rules from section 6.1 still apply to worktree-based development.
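
The rules above can be sketched as follows; the worktree paths and branch names are illustrative:

```bash
# Scratch repository for demonstration; in a real clone, run from the repo root.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email zeroclaw_user@example.com
git config user.name zeroclaw_user
git commit -q --allow-empty -m "chore: initial commit"

# One worktree per active branch/PR stream, named by scope.
git worktree add "$repo-wt-ci-hardening" -b ci-hardening
git worktree add "$repo-wt-provider-fix" -b provider-fix

git worktree list   # each path is pinned to its own branch

# Remove a stale worktree (and its branch) when the PR stream is done.
git worktree remove "$repo-wt-provider-fix"
git branch -q -D provider-fix
```

Each worktree is a full checkout, so validation commands run inside it see only that branch's state.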

### 6.3 Code Naming Contract (Required)

Apply these naming rules for all code changes unless a subsystem has a stronger existing pattern.

- Use Rust standard casing consistently: modules/files `snake_case`, types/traits/enums `PascalCase`, functions/variables `snake_case`, constants/statics `SCREAMING_SNAKE_CASE`.
- Name types and modules by domain role, not implementation detail (for example `DiscordChannel`, `SecurityPolicy`, `MemoryStore` over vague names like `Manager`/`Helper`).
- Keep trait implementer naming explicit and predictable: `<ProviderName>Provider`, `<ChannelName>Channel`, `<ToolName>Tool`, `<BackendName>Memory`.
- Keep factory registration keys stable, lowercase, and user-facing (for example `"openai"`, `"discord"`, `"shell"`), and avoid alias sprawl without migration need.
- Name tests by behavior/outcome (`<subject>_<expected_behavior>`) and keep fixture identifiers neutral/project-scoped.
- If identity-like naming is required in tests/examples, use ZeroClaw-native labels only (`ZeroClawAgent`, `zeroclaw_user`, `zeroclaw_node`).

### 6.4 Architecture Boundary Contract (Required)

Use these rules to keep the trait/factory architecture stable under growth.

- Extend capabilities by adding trait implementations + factory wiring first; avoid cross-module rewrites for isolated features.
- Keep dependency direction inward to contracts: concrete integrations depend on trait/config/util layers, not on other concrete integrations.
- Avoid creating cross-subsystem coupling (for example provider code importing channel internals, tool code mutating gateway policy directly).
- Keep module responsibilities single-purpose: orchestration in `agent/`, transport in `channels/`, model I/O in `providers/`, policy in `security/`, execution in `tools/`.
- Introduce new shared abstractions only after repeated use (rule-of-three), with at least one real caller in current scope.
- For config/schema changes, treat keys as public contract: document defaults, compatibility impact, and migration/rollback path.

## 7) Change Playbooks

### 7.1 Adding a Provider

- Implement `Provider` in `src/providers/`.
- Register in `src/providers/mod.rs` factory.
- Add focused tests for factory wiring and error paths.
- Avoid leaking provider-specific behavior into shared orchestration code.

### 7.2 Adding a Channel

- Implement `Channel` in `src/channels/`.
- Keep `send`, `listen`, `health_check`, typing semantics consistent.
- Cover auth/allowlist/health behavior with tests.

### 7.3 Adding a Tool

- Implement `Tool` in `src/tools/` with strict parameter schema.
- Validate and sanitize all inputs.
- Return structured `ToolResult`; avoid panics in runtime path.

### 7.4 Adding a Peripheral

- Implement `Peripheral` in `src/peripherals/`.
- Peripherals expose `tools()` — each tool delegates to the hardware (GPIO, sensors, etc.).
- Register board type in config schema if needed.
- See `docs/hardware-peripherals-design.md` for protocol and firmware notes.

### 7.5 Security / Runtime / Gateway Changes

- Include threat/risk notes and rollback strategy.
- Add/update tests or validation evidence for failure modes and boundaries.
- Keep observability useful but non-sensitive.
- For `.github/workflows/**` changes, include Actions allowlist impact in PR notes and update `docs/actions-source-policy.md` when sources change.

### 7.6 Docs System / README / IA Changes

- Treat docs navigation as product UX: preserve clear pathing from README -> docs hub -> SUMMARY -> category index.
- Keep top-level nav concise; avoid duplicative links across adjacent nav blocks.
- When runtime surfaces change, update related references (`commands/providers/channels/config/runbook/troubleshooting`).
- Keep multilingual entry-point parity for all supported locales (`en`, `zh-CN`, `ja`, `ru`, `fr`, `vi`) when nav or key wording changes.
- When shared docs wording changes, sync corresponding localized docs for supported locales in the same PR (or explicitly document deferral and follow-up PR).
- For docs snapshots, add new date-stamped files for new sprints rather than rewriting historical context.

## 8) Validation Matrix

Default local checks for code changes:

```bash
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test
```

Preferred local pre-PR validation path (recommended, not required):

```bash
./dev/ci.sh all
```

Notes:

- Local Docker-based CI is strongly recommended when Docker is available.
- Contributors are not blocked from opening a PR if local Docker CI is unavailable; in that case run the most relevant native checks and document what was run.

Additional expectations by change type:

- **Docs/template-only**:
  - run markdown lint and link-integrity checks
  - if touching README/docs-hub/SUMMARY/collection indexes, verify EN/ZH/JA/RU navigation parity
  - if touching bootstrap docs/scripts, run `bash -n bootstrap.sh scripts/bootstrap.sh scripts/install.sh`
- **Workflow changes**: validate YAML syntax; run workflow lint/sanity checks when available.
- **Security/runtime/gateway/tools**: include at least one boundary/failure-mode validation.

If full checks are impractical, run the most relevant subset and document what was skipped and why.
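
A minimal helper that applies this matrix locally, running whichever checks are installed and reporting what was skipped, could look like the following sketch (the script itself is illustrative, not a repository file):

```bash
# Write the helper script; run it from the repository root before opening a PR.
helper="$(mktemp)"
cat > "$helper" <<'EOF'
#!/usr/bin/env bash
# pre-pr-check: run default local checks, skipping tools that are not installed.
set -u
skipped=""

run_check() {
  label="$1"; shift
  if command -v "$1" >/dev/null 2>&1; then
    echo "==> $label"
    "$@" || { echo "FAILED: $label" >&2; exit 1; }
  else
    echo "skip: $label ($1 not installed)"
    skipped="$skipped $label"
  fi
}

run_check rustfmt cargo fmt --all -- --check
run_check clippy  cargo clippy --all-targets -- -D warnings
run_check tests   cargo test

# Per the matrix: anything skipped must be documented in the PR notes.
[ -n "$skipped" ] && echo "document skipped checks in PR notes:$skipped"
exit 0
EOF
bash -n "$helper" && echo "helper parses cleanly"
```

The skip-and-report behavior mirrors the rule above: missing tooling does not block a PR, but what was not run must be stated explicitly.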

## 9) Collaboration and PR Discipline

- Follow `.github/pull_request_template.md` fully (including side effects / blast radius).
- Keep PR descriptions concrete: problem, change, non-goals, risk, rollback.
- Use conventional commit titles.
- Prefer small PRs (`size: XS/S/M`) when possible.
- Agent-assisted PRs are welcome, **but contributors remain accountable for understanding what their code will do**.

### 9.1 Privacy/Sensitive Data and Neutral Wording (Required)

Treat privacy and neutrality as merge gates, not best-effort guidelines.

- Never commit personal or sensitive data in code, docs, tests, fixtures, snapshots, logs, examples, or commit messages.
- Prohibited data includes (non-exhaustive): real names, personal emails, phone numbers, addresses, access tokens, API keys, credentials, IDs, and private URLs.
- Use neutral project-scoped placeholders (for example: `user_a`, `test_user`, `project_bot`, `example.com`) instead of real identity data.
- Test names/messages/fixtures must be impersonal and system-focused; avoid first-person or identity-specific language.
- If identity-like context is unavoidable, use ZeroClaw-scoped roles/labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`) and avoid real-world personas.
- Recommended identity-safe naming palette (use when identity-like context is required):
  - actor labels: `ZeroClawAgent`, `ZeroClawOperator`, `ZeroClawMaintainer`, `zeroclaw_user`
  - service/runtime labels: `zeroclaw_bot`, `zeroclaw_service`, `zeroclaw_runtime`, `zeroclaw_node`
  - environment labels: `zeroclaw_project`, `zeroclaw_workspace`, `zeroclaw_channel`
- If reproducing external incidents, redact and anonymize all payloads before committing.
- Before push, review `git diff --cached` specifically for accidental sensitive strings and identity leakage.
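
The last step can be partially automated by grepping the staged diff; the pattern list below is an illustrative starting point, not a complete secret scanner:

```bash
# Scratch repository for demonstration; in real use, run the scan in your clone.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email zeroclaw_user@example.com
git config user.name zeroclaw_user
git commit -q --allow-empty -m "chore: initial commit"

# Stage a file containing an obviously fake key shape.
printf 'let api_key = "EXAMPLE-NOT-A-REAL-KEY";\n' > demo.rs
git add demo.rs

# Pattern list is a starting point; extend it for your environment.
pattern='api[_-]?key|secret|token|password|PRIVATE KEY'

if git diff --cached | grep -E '^\+' | grep -iEq "$pattern"; then
  echo "possible sensitive string staged"
else
  echo "staged diff looks clean"
fi
```

A hit is a prompt for manual review, not an automatic block; dedicated secret scanners catch far more shapes than a grep can.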

### 9.2 Superseded-PR Attribution (Required)

When a PR supersedes another contributor's PR and carries forward substantive code or design decisions, preserve authorship explicitly.

- In the integrating commit message, add one `Co-authored-by: Name <email>` trailer per superseded contributor whose work is materially incorporated.
- Use a GitHub-recognized email (`<login@users.noreply.github.com>` or the contributor's verified commit email) so attribution is rendered correctly.
- Keep trailers on their own lines after a blank line at commit-message end; never encode them as escaped `\n` text.
- In the PR body, list superseded PR links and briefly state what was incorporated from each.
- If no actual code/design was incorporated (only inspiration), do not use `Co-authored-by`; give credit in PR notes instead.

### 9.3 Superseded-PR PR Template (Recommended)

When superseding multiple PRs, use a consistent title/body structure to reduce reviewer ambiguity.

- Recommended title format: `feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]`
- If this is docs/chore/meta only, keep the same supersede suffix and use the appropriate conventional-commit type.
- In the PR body, include the following template (fill placeholders, remove non-applicable lines):

```md
## Supersedes
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>

## Integrated Scope
- From #<pr_a>: <what was materially incorporated>
- From #<pr_b>: <what was materially incorporated>
- From #<pr_n>: <what was materially incorporated>

## Attribution
- Co-authored-by trailers added for materially incorporated contributors: Yes/No
- If No, explain why (for example: no direct code/design carry-over)

## Non-goals
- <explicitly list what was not carried over>

## Risk and Rollback
- Risk: <summary>
- Rollback: <revert commit/PR strategy>
```
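
One way to keep the body deterministic is to generate it from a script. The PR numbers, authors, and scope lines below are placeholders, and the final step assumes the GitHub CLI (`gh`) is installed:

```bash
# Fill the template into a body file; all values shown are placeholders.
body="$(mktemp)"
cat > "$body" <<'EOF'
## Supersedes
- #1234 by @user_a
- #1301 by @user_b

## Integrated Scope
- From #1234: provider retry wrapper
- From #1301: config key rename

## Attribution
- Co-authored-by trailers added for materially incorporated contributors: Yes

## Non-goals
- Gateway changes were not carried over

## Risk and Rollback
- Risk: low; change is provider-scoped
- Rollback: revert the single squash commit
EOF

grep -c '^## ' "$body"   # 5 — one per required section
# Then (assumes the GitHub CLI):
#   gh pr create --title "feat(providers): unify and supersede #1234, #1301" --body-file "$body"
```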

### 9.4 Superseded-PR Commit Template (Recommended)

When a commit unifies or supersedes prior PR work, use a deterministic commit message layout so attribution is machine-parsed and reviewer-friendly.

- Keep one blank line between message sections, and exactly one blank line before trailer lines.
- Keep each trailer on its own line; do not wrap, indent, or encode as escaped `\n` text.
- Add one `Co-authored-by` trailer per materially incorporated contributor, using GitHub-recognized email.
- If no direct code/design is carried over, omit `Co-authored-by` and explain attribution in the PR body instead.

```text
feat(<scope>): unify and supersede #<pr_a>, #<pr_b> [and #<pr_n>]

<one-paragraph summary of integrated outcome>

Supersedes:
- #<pr_a> by @<author_a>
- #<pr_b> by @<author_b>
- #<pr_n> by @<author_n>

Integrated scope:
- <subsystem_or_feature_a>: from #<pr_x>
- <subsystem_or_feature_b>: from #<pr_y>

Co-authored-by: <Name A> <login_a@users.noreply.github.com>
Co-authored-by: <Name B> <login_b@users.noreply.github.com>
```
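
Git can append these trailers mechanically, which avoids the wrapping and escaped-`\n` failure modes called out above. A sketch using `git interpret-trailers` (the same machinery behind `git commit --trailer`, available in Git 2.32+); names and PR numbers are placeholders:

```bash
# Scratch repository so git has a config context; the trailers come from the flags.
repo="$(mktemp -d)"
cd "$repo"
git init -q

# interpret-trailers appends each trailer on its own line after a blank separator.
printf 'feat(providers): unify and supersede #1234, #1301\n\nIntegrated outcome summary.\n' \
  | git interpret-trailers \
      --trailer 'Co-authored-by: User A <user_a@users.noreply.github.com>' \
      --trailer 'Co-authored-by: User B <user_b@users.noreply.github.com>'
```

Because git places the trailers itself, the "exactly one blank line before trailer lines" rule holds by construction.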

Reference docs:

- `CONTRIBUTING.md`
- `docs/README.md`
- `docs/SUMMARY.md`
- `docs/docs-inventory.md`
- `docs/commands-reference.md`
- `docs/providers-reference.md`
- `docs/channels-reference.md`
- `docs/config-reference.md`
- `docs/operations-runbook.md`
- `docs/troubleshooting.md`
- `docs/one-click-bootstrap.md`
- `docs/pr-workflow.md`
- `docs/reviewer-playbook.md`
- `docs/ci-map.md`
- `docs/actions-source-policy.md`

## 10) Anti-Patterns (Do Not)
## Anti-Patterns

- Do not add heavy dependencies for minor convenience.
- Do not silently weaken security policy or access constraints.
- Do not add speculative config/feature flags “just in case”.
- Do not add speculative config/feature flags "just in case".
- Do not mix massive formatting-only changes with functional changes.
- Do not modify unrelated modules “while here”.
- Do not modify unrelated modules "while here".
- Do not bypass failing checks without explicit explanation.
- Do not hide behavior-changing side effects in refactor commits.
- Do not include personal identity or sensitive information in test data, examples, docs, or commits.
|
||||
|
||||
## 11) Handoff Template (Agent -> Agent / Maintainer)
## Linked References

When handing off work, include:

1. What changed
2. What did not change
3. Validation run and results
4. Remaining risks / unknowns
5. Next recommended action

## 12) Vibe Coding Guardrails

When working in fast iterative mode:

- Keep each iteration reversible (small commits, clear rollback).
- Validate assumptions with code search before implementing.
- Prefer deterministic behavior over clever shortcuts.
- Do not “ship and hope” on security-sensitive paths.
- If uncertain, leave a concrete TODO with verification context, not a hidden guess.
- `@docs/contributing/change-playbooks.md` — adding providers, channels, tools, peripherals; security/gateway changes; architecture boundaries
- `@docs/contributing/pr-discipline.md` — privacy rules, superseded-PR attribution/templates, handoff template
- `@docs/contributing/docs-contract.md` — docs system contract, i18n rules, locale parity

+1
-1
@@ -60,7 +60,7 @@ representative at an online or offline event.

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
https://x.com/willsarg617.
https://x.com/argenistherose.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the

+29
-13
@@ -2,6 +2,21 @@

Thanks for your interest in contributing to ZeroClaw! This guide will help you get started.

## Branching Model

> **Important — `master` is the default branch.**
>
> ZeroClaw uses **`master`** as its single source-of-truth branch. The `main` branch has been removed.
>
> Previously, some documentation and scripts referenced a `main` branch, which caused 404 errors and contributor confusion (see [#2929](https://github.com/zeroclaw-labs/zeroclaw/issues/2929), [#3061](https://github.com/zeroclaw-labs/zeroclaw/issues/3061), [#3194](https://github.com/zeroclaw-labs/zeroclaw/pull/3194)). As of March 2026, all references have been corrected and the `main` branch deleted.
>
> **How contributors should work:**
> 1. Fork the repository
> 2. Create a `feat/*` or `fix/*` branch from `master`
> 3. Open a PR targeting `master`
>
> Do **not** create or push to a `main` branch.

## First-Time Contributors

Welcome — contributions of all sizes are valued. If this is your first contribution, here is how to get started:
@@ -15,9 +30,9 @@ Welcome — contributions of all sizes are valued. If this is your first contrib

3. **Follow the fork → branch → change → test → PR workflow:**
   - Fork the repository and clone your fork
   - Create a feature branch (`git checkout -b fix/my-change`)
   - Create a feature branch (`git checkout -b feat/my-change` or `git checkout -b fix/my-change`)
   - Make your changes and run `cargo fmt && cargo clippy && cargo test`
   - Open a PR against `dev` using the PR template
   - Open a PR against `master` using the PR template

4. **Start with Track A.** ZeroClaw uses three [collaboration tracks](#collaboration-tracks-risk-based) (A/B/C) based on risk. First-time contributors should target **Track A** (docs, tests, chore) — these require lighter review and are the fastest path to a merged PR.
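The branch-from-`master` flow above can be exercised locally without touching the network; this is a throwaway-repo sketch, with `fix/my-change` as a placeholder branch name:

```shell
# Local sketch of the contributor branch flow (temp repo, no network).
repo="$(mktemp -d)"
git -C "$repo" init -q
git -C "$repo" symbolic-ref HEAD refs/heads/master   # make master the base branch
git -C "$repo" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m 'init'
# Branch from master, as the workflow above requires:
git -C "$repo" checkout -q -b fix/my-change master
git -C "$repo" branch --show-current   # → fix/my-change
```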

@@ -210,20 +225,20 @@ To keep docs useful under high PR volume, we use these rules:
- **Side-effect visibility**: document blast radius, failure modes, and rollback before merge.
- **Automation assists, humans decide**: bots triage and label, but merge accountability stays human.
- **Index-first discoverability**: `docs/README.md` is the first entry point for operational documentation.
- **Template-first authoring**: start new operational docs from `docs/doc-template.md`.
- **Template-first authoring**: start new operational docs from `docs/contributing/doc-template.md`.

### Documentation System Map

| Doc | Primary purpose | When to update |
|---|---|---|
| `docs/README.md` | canonical docs index and taxonomy | add/remove docs or change documentation ownership/navigation |
| `docs/doc-template.md` | standard skeleton for new operational documentation | when required sections or documentation quality bar changes |
| `docs/contributing/doc-template.md` | standard skeleton for new operational documentation | when required sections or documentation quality bar changes |
| `CONTRIBUTING.md` | contributor contract and readiness baseline | contributor expectations or policy changes |
| `docs/pr-workflow.md` | governance logic and merge contract | workflow/risk/merge gate changes |
| `docs/reviewer-playbook.md` | reviewer operating checklist | review depth or triage behavior changes |
| `docs/ci-map.md` | CI ownership and triage entry points | workflow trigger/job ownership changes |
| `docs/network-deployment.md` | runtime deployment and network operating guide | gateway/channel/tunnel/network runtime behavior changes |
| `docs/proxy-agent-playbook.md` | agent-operable proxy runbook and rollback recipes | proxy scope/selector/tooling behavior changes |
| `docs/contributing/pr-workflow.md` | governance logic and merge contract | workflow/risk/merge gate changes |
| `docs/contributing/reviewer-playbook.md` | reviewer operating checklist | review depth or triage behavior changes |
| `docs/contributing/ci-map.md` | CI ownership and triage entry points | workflow trigger/job ownership changes |
| `docs/ops/network-deployment.md` | runtime deployment and network operating guide | gateway/channel/tunnel/network runtime behavior changes |
| `docs/ops/proxy-agent-playbook.md` | agent-operable proxy runbook and rollback recipes | proxy scope/selector/tooling behavior changes |

## PR Definition of Ready (DoR)

@@ -237,7 +252,7 @@ Before requesting review, ensure all of the following are true:
- Tests/fixtures/examples use neutral project-scoped wording (no identity-specific or first-person phrasing).
- If identity-like wording is required, use ZeroClaw-centric labels only (for example: `ZeroClawAgent`, `ZeroClawOperator`, `zeroclaw_user`).
- If docs were changed, update `docs/README.md` navigation and reciprocal links with related docs.
- If a new operational doc was added, start from `docs/doc-template.md` and keep risk/rollback/troubleshooting sections where applicable.
- If a new operational doc was added, start from `docs/contributing/doc-template.md` and keep risk/rollback/troubleshooting sections where applicable.
- Linked issue (or rationale for no issue) is included.

## PR Definition of Done (DoD)
@@ -265,9 +280,9 @@ When PR traffic is high (especially with AI-assisted contributions), these rules
- **Identity normalization**: when identity traits are unavoidable, use ZeroClaw/project-native roles instead of personal or real-world identities.
- **Supersede hygiene**: if your PR replaces an older open PR, add `Supersedes #...` and request maintainers close the outdated one.

Full maintainer workflow: [`docs/pr-workflow.md`](docs/pr-workflow.md).
CI workflow ownership and triage map: [`docs/ci-map.md`](docs/ci-map.md).
Reviewer operating checklist: [`docs/reviewer-playbook.md`](docs/reviewer-playbook.md).
Full maintainer workflow: [`docs/contributing/pr-workflow.md`](docs/contributing/pr-workflow.md).
CI workflow ownership and triage map: [`docs/contributing/ci-map.md`](docs/contributing/ci-map.md).
Reviewer operating checklist: [`docs/contributing/reviewer-playbook.md`](docs/contributing/reviewer-playbook.md).

## Agent Collaboration Guidance

@@ -544,3 +559,4 @@ Recommended scope keys in commit titles:
## License

By contributing, you agree that your contributions will be licensed under the MIT License.
# Contributing Guide Update

Generated
+230
-290
File diff suppressed because it is too large
+37
-8
@@ -4,7 +4,7 @@ resolver = "2"

[package]
name = "zeroclaw"
version = "0.1.6"
version = "0.1.9"
edition = "2021"
authors = ["theonlyhennygod"]
license = "MIT OR Apache-2.0"
@@ -21,7 +21,7 @@ clap = { version = "4.5", features = ["derive"] }
clap_complete = "4.5"

# Async runtime - feature-optimized for size
tokio = { version = "1.42", default-features = false, features = ["rt-multi-thread", "macros", "time", "net", "io-util", "sync", "process", "io-std", "fs", "signal"] }
tokio = { version = "1.50", default-features = false, features = ["rt-multi-thread", "macros", "time", "net", "io-util", "sync", "process", "io-std", "fs", "signal"] }
tokio-util = { version = "0.7", default-features = false }
tokio-stream = { version = "0.1.18", default-features = false, features = ["fs", "sync"] }

@@ -34,6 +34,7 @@ matrix-sdk = { version = "0.16", optional = true, default-features = false, feat
# Serialization
serde = { version = "1.0", default-features = false, features = ["derive"] }
serde_json = { version = "1.0", default-features = false, features = ["std"] }
serde_ignored = "0.1"

# Config
directories = "6.0"
@@ -57,15 +58,18 @@ image = { version = "0.25", default-features = false, features = ["jpeg", "png"]
# URL encoding for web search
urlencoding = "2.1"

# HTML to plain text conversion (web_fetch tool)
nanohtml2text = "0.2"

# Optional Rust-native browser automation backend
fantoccini = { version = "0.22.0", optional = true, default-features = false, features = ["rustls-tls"] }
fantoccini = { version = "0.22.1", optional = true, default-features = false, features = ["rustls-tls"] }

# Error handling
anyhow = "1.0"
thiserror = "2.0"

# UUID generation
uuid = { version = "1.11", default-features = false, features = ["v4", "std"] }
uuid = { version = "1.22", default-features = false, features = ["v4", "std"] }

# Authenticated encryption (AEAD) for secret store
chacha20poly1305 = "0.10"
@@ -81,6 +85,9 @@ rand = "0.10"
# serde-big-array for wa-rs storage (large array serialization)
serde-big-array = { version = "0.5", optional = true }

# Portable atomic fallbacks for 32-bit targets (no native 64-bit atomics)
portable-atomic = { version = "1", optional = true }

# Fast mutexes that don't poison on panic
parking_lot = "0.12"

@@ -113,7 +120,7 @@ which = "8.0"
# WebSocket client channels (Discord/Lark/DingTalk/Nostr)
tokio-tungstenite = { version = "0.28", features = ["rustls-tls-webpki-roots"] }
futures-util = { version = "0.3", default-features = false, features = ["sink"] }
nostr-sdk = { version = "0.44", default-features = false, features = ["nip04", "nip59"] }
nostr-sdk = { version = "0.44", default-features = false, features = ["nip04", "nip59"], optional = true }
regex = "1.10"
hostname = "0.4.2"
rustls = "0.23"
@@ -158,6 +165,9 @@ probe-rs = { version = "0.31", optional = true }
# PDF extraction for datasheet RAG (optional, enable with --features rag-pdf)
pdf-extract = { version = "0.10", optional = true }

# Terminal QR rendering for WhatsApp Web pairing flow.
qrcode = { version = "0.14", optional = true }

# WhatsApp Web client (wa-rs) — optional, enable with --features whatsapp-web
# Uses wa-rs for Bot and Client, wa-rs-core for storage traits, custom rusqlite backend avoids Diesel conflict.
wa-rs = { version = "0.2", optional = true, default-features = false }
@@ -177,10 +187,12 @@ landlock = { version = "0.4", optional = true }
libc = "0.2"

[features]
default = []
default = ["channel-nostr"]
channel-nostr = ["dep:nostr-sdk"]
hardware = ["nusb", "tokio-serial"]
channel-matrix = ["dep:matrix-sdk"]
channel-lark = ["dep:prost"]
channel-feishu = ["channel-lark"] # Alias for Feishu users (Lark and Feishu are the same platform)
memory-postgres = ["dep:postgres"]
observability-otel = ["dep:opentelemetry", "dep:opentelemetry_sdk", "dep:opentelemetry-otlp"]
peripheral-rpi = ["rppal"]
@@ -198,7 +210,7 @@ probe = ["dep:probe-rs"]
# rag-pdf = PDF ingestion for datasheet RAG
rag-pdf = ["dep:pdf-extract"]
# whatsapp-web = Native WhatsApp Web client with custom rusqlite storage backend
whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "dep:serde-big-array", "dep:prost"]
whatsapp-web = ["dep:wa-rs", "dep:wa-rs-core", "dep:wa-rs-binary", "dep:wa-rs-proto", "dep:wa-rs-ureq-http", "dep:wa-rs-tokio-transport", "dep:serde-big-array", "dep:prost", "dep:qrcode"]
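The features hunk above flips `default` from `[]` to `["channel-nostr"]`, so a plain `cargo build` now pulls in `nostr-sdk`. Builders who want the old lean default can pass `--no-default-features` (optionally re-adding features with `--features`), and a downstream crate can opt out in its manifest; the dependency spec below is a hypothetical sketch using the feature names from this diff:

```toml
# Hypothetical downstream Cargo.toml: depend on zeroclaw without the new
# channel-nostr default, re-enabling only the channels actually needed.
[dependencies]
zeroclaw = { version = "0.1.9", default-features = false, features = ["channel-matrix"] }
```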

[profile.release]
opt-level = "z" # Optimize for size
@@ -222,9 +234,26 @@ strip = true
panic = "abort"

[dev-dependencies]
tempfile = "3.14"
tempfile = "3.26"
criterion = { version = "0.8", features = ["async_tokio"] }
wiremock = "0.6"
scopeguard = "1.2"

[[test]]
name = "component"
path = "tests/test_component.rs"

[[test]]
name = "integration"
path = "tests/test_integration.rs"

[[test]]
name = "system"
path = "tests/test_system.rs"

[[test]]
name = "live"
path = "tests/test_live.rs"

[[bench]]
name = "agent_benchmarks"

+13
-13
@@ -58,20 +58,20 @@ RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/regist

# Prepare runtime directory structure and default config inline (no extra stage)
RUN mkdir -p /zeroclaw-data/.zeroclaw /zeroclaw-data/workspace && \
cat > /zeroclaw-data/.zeroclaw/config.toml <<EOF && \
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> /zeroclaw-data/.zeroclaw/config.toml && \
chown -R 65534:65534 /zeroclaw-data
workspace_dir = "/zeroclaw-data/workspace"
config_path = "/zeroclaw-data/.zeroclaw/config.toml"
api_key = ""
default_provider = "openrouter"
default_model = "anthropic/claude-sonnet-4-20250514"
default_temperature = 0.7

[gateway]
port = 42617
host = "[::]"
allow_public_bind = true
EOF
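The hunk above swaps the chained `printf` for a heredoc when writing the default config (heredocs in a Dockerfile `RUN`, as far as I recall, require BuildKit with Dockerfile syntax 1.4+). The same pattern works in plain shell; a minimal sketch with a temp file standing in for the real path:

```shell
# Write a config file with a heredoc instead of chained printf calls.
# Quoting the delimiter (<<'EOF') would also suppress $-expansion;
# the Dockerfile above uses an unquoted EOF.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
workspace_dir = "/zeroclaw-data/workspace"
default_provider = "openrouter"
EOF
grep -c '=' "$cfg"   # → 2 (both lines are key = value pairs)
```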

# ── Stage 2: Development Runtime (Debian) ────────────────────
FROM debian:trixie-slim@sha256:f6e2cfac5cf956ea044b4bd75e6397b4372ad88fe00908045e9a0d21712ae3ba AS dev

@@ -17,7 +17,7 @@ License

This software is available under a dual-license model:

1. MIT License — see LICENSE
1. MIT License — see LICENSE-MIT
2. Apache License 2.0 — see LICENSE-APACHE

You may use either license. Contributors grant rights under both.

+914
@@ -0,0 +1,914 @@
<p align="center" dir="rtl">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center" dir="rtl">
<strong>Zero overhead. Zero compromises. 100% Rust. 100% neutral.</strong><br>
<strong dir="ltr">⚡️ Runs on $10 hardware in under 5MB RAM: 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center" dir="rtl">
Built by students and community members from Harvard, MIT, and Sundai.Club.
</p>

<p align="center" dir="rtl">
🌐 <strong>Languages:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>

<p align="center" dir="rtl">
<a href="#البدء-السريع">Quick Start</a> |
<a href="bootstrap.sh">One-Click Setup</a> |
<a href="docs/README.md">Documentation Hub</a> |
<a href="docs/SUMMARY.md">Documentation Index</a>
</p>

<p align="center" dir="rtl">
<strong>Quick access:</strong>
<a href="docs/reference/README.md">Reference</a> ·
<a href="docs/operations/README.md">Operations</a> ·
<a href="docs/troubleshooting.md">Troubleshooting</a> ·
<a href="docs/security/README.md">Security</a> ·
<a href="docs/hardware/README.md">Hardware</a> ·
<a href="docs/contributing/README.md">Contributing</a>
</p>

<p align="center" dir="rtl">
<strong>Fast, lightweight, fully self-sovereign AI assistant infrastructure</strong><br />
Deploy anywhere. Swap anything.
</p>

<p align="center" dir="rtl">
ZeroClaw is a <strong>runtime operating system</strong> for automated work — infrastructure that abstracts models, tools, memory, and execution so you can build agents once and run them anywhere.
</p>

<p align="center"><code>trait-based architecture · secure-by-default runtime · swappable provider/channel/tool · everything pluggable</code></p>

### 📢 Announcements

Use this table for important notices (compatibility changes, security notices, maintenance windows, and release holds).

| Date (UTC) | Level | Notice | Action |
| ---------- | ----- | ------ | ------ |
| 2026-02-19 | _Critical_ | **We are not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository impersonates our official site/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social media accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience while you waited. We are still seeing impersonation attempts: do not take part in any investment/funding activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its authentication and credential-use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusive to Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the consumer terms of use. | Please avoid Claude Code OAuth integrations for now to prevent any potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |

### ✨ Features

- 🏎️ **Lightweight runtime by default:** common CLI operations and status commands run within a few megabytes of memory in release builds.
- 💰 **Cost-effective deployment:** built for low-cost boards and small cloud instances with no heavy runtime dependencies.
- ⚡ **Fast cold start:** the single-binary Rust runtime keeps command and daemon startup near-instant for day-to-day operations.
- 🌍 **Portable architecture:** one single-binary workflow across ARM, x86, and RISC-V with swappable providers/channels/tools.

### Why teams choose ZeroClaw

- **Lightweight by default:** small Rust binary, fast startup, low memory footprint.
- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scoping.
- **Fully swappable:** the core systems are traits (providers, channels, tools, memory, tunnels).
- **No vendor lock-in:** OpenAI-compatible provider support + pluggable custom endpoints.

## Benchmark Snapshot (ZeroClaw vs OpenClaw, reproducible)

A quick benchmark on a local machine (macOS arm64, February 2026), normalized to 0.8 GHz edge hardware.

| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
| **Language** | TypeScript | Python | Go | **Rust** |
| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
| **Startup (0.8 GHz core)** | > 500 s | > 30 s | < 1 s | **< 10 ms** |
| **Binary size** | ~28 MB (dist) | N/A (scripts) | ~8 MB | **3.4 MB** |
| **Cost** | $599 Mac Mini | ~$50 Linux SBC | $10 Linux board | **any $10 device** |

> Notes: ZeroClaw results were measured on release builds using `/usr/bin/time -l`. OpenClaw requires a Node.js runtime (typically ~390 MB of extra memory overhead), while NanoBot requires a Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM numbers above are runtime memory; build-time compilation requirements are higher.

<p align="center">
<img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw comparison" width="800" />
</p>

### Reproducible local measurement

Benchmark claims can drift as the code and toolchains evolve, so always measure your current build locally:

```bash
cargo build --release
ls -lh target/release/zeroclaw

/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status
```

Sample run (macOS arm64, measured February 18, 2026):

- Release binary size: `8.8M`
- `zeroclaw --help`: real time about `0.02s`, peak memory footprint ~`3.9 MB`
- `zeroclaw status`: real time about `0.01s`, peak memory footprint ~`4.1 MB`
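One portability note on the commands above: `/usr/bin/time -l` is the BSD/macOS flag, while GNU time on Linux uses `-v`; both report a "maximum resident set size" line. A guarded sketch, with `/bin/echo` standing in for the `zeroclaw` binary since it may not be installed on the measuring machine:

```shell
# Peak-memory measurement, portable across BSD/macOS (-l) and GNU (-v) time.
# /bin/echo is a stand-in for target/release/zeroclaw here.
if [ ! -x /usr/bin/time ]; then
  echo "maximum resident set size: (install the 'time' package to measure)"
elif [ "$(uname)" = "Darwin" ]; then
  /usr/bin/time -l /bin/echo ok 2>&1 | grep -i 'maximum resident'
else
  /usr/bin/time -v /bin/echo ok 2>&1 | grep -i 'resident'
fi
```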

## Prerequisites

<details>
<summary><strong>Windows</strong></summary>

### Windows — required

1. **Visual Studio Build Tools** (provides the MSVC linker and the Windows SDK):

```powershell
winget install Microsoft.VisualStudio.2022.BuildTools
```

During installation (or via the Visual Studio Installer), select the **"Desktop development with C++"** workload.

2. **Rust toolchain:**

```powershell
winget install Rustlang.Rustup
```

After installing, open a new terminal and run `rustup default stable` to make sure the stable toolchain is active.

3. **Verify** that both work:
```powershell
rustc --version
cargo --version
```

### Windows — optional

- **Docker Desktop** — only required if you use the [sandboxed Docker runtime](#دعم-وقت-التشغيل-الحالي) (`runtime.kind = "docker"`). Install via `winget install Docker.DockerDesktop`.

</details>

<details>
<summary><strong>Linux / macOS</strong></summary>

### Linux / macOS — required

1. **Core build tools:**
- **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
- **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
- **macOS:** install the Xcode Command Line Tools: `xcode-select --install`

2. **Rust toolchain:**

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

See [rustup.rs](https://rustup.rs) for details.

3. **Verify:**
```bash
rustc --version
cargo --version
```

### Linux / macOS — optional

- **Docker** — only required if you use the [sandboxed Docker runtime](#دعم-وقت-التشغيل-الحالي) (`runtime.kind = "docker"`).
- **Linux (Debian/Ubuntu):** see [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
- **Linux (Fedora/RHEL):** see [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
- **macOS:** install Docker Desktop via [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)

</details>

## Quick Start

### Option 1: Automated setup (recommended)

The `bootstrap.sh` script installs Rust, clones ZeroClaw, compiles it, and sets up your initial development environment:

```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/bootstrap.sh | bash
```

This will:

1. Install Rust (if not already present)
2. Clone the ZeroClaw repository
3. Compile ZeroClaw in release mode
4. Install `zeroclaw` into `~/.cargo/bin/`
5. Create the default workspace structure under `~/.zeroclaw/workspace/`
6. Create the starter configuration file `~/.zeroclaw/workspace/config.toml`

After bootstrapping, reload your shell or run `source ~/.cargo/env` to use the `zeroclaw` command globally.

### Option 2: Manual installation

<details>
<summary><strong>Click to see the manual installation steps</strong></summary>

```bash
# 1. Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# 2. Compile in release mode
cargo build --release --locked

# 3. Install the binary
cargo install --path . --locked

# 4. Initialize the workspace
zeroclaw init

# 5. Verify the installation
zeroclaw --version
zeroclaw status
```

</details>

### After installation

Once installed (via bootstrap or manually), you should see:

```
~/.zeroclaw/workspace/
├── config.toml   # main configuration
├── .pairing      # pairing secrets (created on first run)
├── logs/         # daemon/agent logs
├── skills/       # custom skills
└── memory/       # conversation context storage
```

**Next steps:**

1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
2. Check the [configuration reference](docs/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
4. Test via your preferred channel (see the [channels reference](docs/channels-reference.md))

## Configuration

Edit `~/.zeroclaw/workspace/config.toml` to configure providers, channels, and system behavior.

### Quick Configuration Reference

```toml
[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"

[providers.openai]
api_key = "sk-..."
model = "gpt-4o"

[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."

[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@bot:matrix.org"
password = "..."

[memory]
kind = "markdown" # or "sqlite" or "none"

[runtime]
kind = "native" # or "docker" (requires Docker)
```

**Full reference docs:**

- [Configuration reference](docs/config-reference.md) — all settings, validation, and defaults
- [Providers reference](docs/providers-reference.md) — provider-specific configuration for each AI backend
- [Channels reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, and scaling

### Current Runtime Support

ZeroClaw supports two backends for code execution:

- **`native`** (default) — direct process execution, the fastest path, ideal for trusted environments
- **`docker`** — full container isolation, hardened security policies, requires Docker

Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/config-reference.md#runtime) for full details.
## Commands

```bash
# Workspace management
zeroclaw init             # Initialize a new workspace
zeroclaw status           # Show daemon/agent status
zeroclaw config validate  # Validate config.toml structure and values

# Daemon management
zeroclaw daemon start     # Start the daemon in the background
zeroclaw daemon stop      # Stop the running daemon
zeroclaw daemon restart   # Restart the daemon (reloads config)
zeroclaw daemon logs      # View daemon logs

# Agent management
zeroclaw agent start      # Start the agent (requires a running daemon)
zeroclaw agent stop       # Stop the agent
zeroclaw agent restart    # Restart the agent (reloads config)

# Pairing operations
zeroclaw pairing init     # Generate a new pairing secret
zeroclaw pairing rotate   # Rotate the current pairing secret

# Tunnels (for public exposure)
zeroclaw tunnel start     # Start a tunnel to the local daemon
zeroclaw tunnel stop      # Stop the active tunnel

# Diagnostics
zeroclaw doctor           # Run system health checks
zeroclaw version          # Show version and build info
```

See the [commands reference](docs/commands-reference.md) for full options and examples.
## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                        Channels (trait)                         │
│  Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom       │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                       Agent Orchestrator                        │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐           │
│  │   Message    │  │   Context    │  │     Tool     │           │
│  │   Routing    │  │    Memory    │  │  Execution   │           │
│  └──────────────┘  └──────────────┘  └──────────────┘           │
└─────────────────────────┬───────────────────────────────────────┘
                          │
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│  Providers   │  │    Memory    │  │    Tools     │
│   (trait)    │  │   (trait)    │  │   (trait)    │
├──────────────┤  ├──────────────┤  ├──────────────┤
│  Anthropic   │  │  Markdown    │  │  Filesystem  │
│  OpenAI      │  │  SQLite      │  │  Bash        │
│  Gemini      │  │  None        │  │  Web Fetch   │
│  Ollama      │  │  Custom      │  │  Custom      │
│  Custom      │  └──────────────┘  └──────────────┘
└──────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                        Runtime (trait)                          │
│                       Native │ Docker                           │
└─────────────────────────────────────────────────────────────────┘
```

**Core principles:**

- Everything is a **trait** — providers, channels, tools, memory, tunnels
- Channels call the orchestrator; the orchestrator calls providers + tools
- The memory system manages conversation context (markdown, SQLite, or none)
- The runtime abstracts code execution (native or Docker)
- No vendor lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes

See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
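As a rough sketch of what this trait-based design implies, consider a minimal `Provider` abstraction. The trait name mirrors the architecture above, but the exact method signatures here are assumptions for illustration, not ZeroClaw's real API:

```rust
// Hypothetical sketch of the trait-based design described above.
// The real ZeroClaw trait signatures may differ; names here are
// illustrative only.

trait Provider {
    fn complete(&self, prompt: &str) -> String;
}

struct MockProvider;

impl Provider for MockProvider {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

// The orchestrator depends only on the trait, so any backend
// (Anthropic, OpenAI, Ollama, or a mock) can be swapped in.
fn orchestrate(provider: &dyn Provider, message: &str) -> String {
    provider.complete(message)
}

fn main() {
    let reply = orchestrate(&MockProvider, "hello");
    assert_eq!(reply, "echo: hello");
    println!("{reply}");
}
```

Because the orchestrator only sees `&dyn Provider`, swapping backends is a configuration change rather than a code change.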
## Examples

### Telegram Bot

```toml
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
allowed_users = [987654321] # your Telegram user ID
```

Start the daemon + agent, then message your bot on Telegram:

```
/start
Hello! Can you help me write a Python script?
```

The bot responds with AI-generated code, executes tools when asked, and maintains conversation context.

### Matrix (End-to-End Encrypted)

```toml
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@zeroclaw:matrix.org"
password = "..."
device_name = "zeroclaw-prod"
e2ee_enabled = true
```

Invite `@zeroclaw:matrix.org` to an encrypted room and the bot replies fully encrypted. See the [Matrix E2EE guide](docs/matrix-e2ee-guide.md) for device-verification setup.

### Multi-Provider

```toml
[providers.anthropic]
enabled = true
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"

[providers.openai]
enabled = true
api_key = "sk-..."
model = "gpt-4o"

[orchestrator]
default_provider = "anthropic"
fallback_providers = ["openai"] # fail over on provider error
```

If Anthropic fails or hits a rate limit, the orchestrator automatically fails over to OpenAI.
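The failover behavior can be pictured with a small sketch: try the default provider, then each fallback in order. This is an illustration under assumed types, not ZeroClaw's actual orchestrator code:

```rust
// Illustrative failover loop (not ZeroClaw's real implementation):
// try the default provider, then each fallback in order.

fn call_provider(name: &str, healthy: &[&str]) -> Result<String, String> {
    // Stand-in for a real API call: succeed only if the provider is "healthy".
    if healthy.contains(&name) {
        Ok(format!("response from {name}"))
    } else {
        Err(format!("{name} failed or rate-limited"))
    }
}

fn complete_with_fallback(
    default: &str,
    fallbacks: &[&str],
    healthy: &[&str],
) -> Result<String, String> {
    std::iter::once(default)
        .chain(fallbacks.iter().copied())
        .map(|p| call_provider(p, healthy))
        .find(Result::is_ok)
        .unwrap_or_else(|| Err("all providers failed".into()))
}

fn main() {
    // anthropic is down; the orchestrator falls back to openai
    let result = complete_with_fallback("anthropic", &["openai"], &["openai"]);
    assert_eq!(result, Ok("response from openai".to_string()));
}
```

The lazy iterator means fallbacks are only contacted after the default actually fails.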
### Custom Memory

```toml
[memory]
kind = "sqlite"
path = "~/.zeroclaw/workspace/memory/conversations.db"
retention_days = 90 # auto-delete after 90 days
```

Or use Markdown for human-readable storage:

```toml
[memory]
kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```

See the [configuration reference](docs/config-reference.md#memory) for all memory options.
## Provider Support

| Provider          | Status     | API Key             | Example Models                                       |
| ----------------- | ---------- | ------------------- | ---------------------------------------------------- |
| **Anthropic**     | ✅ Stable  | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
| **OpenAI**        | ✅ Stable  | `OPENAI_API_KEY`    | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini`             |
| **Google Gemini** | ✅ Stable  | `GOOGLE_API_KEY`    | `gemini-2.0-flash-exp`, `gemini-exp-1206`            |
| **Ollama**        | ✅ Stable  | N/A (local)         | `llama3.3`, `qwen2.5`, `phi4`                        |
| **Cerebras**      | ✅ Stable  | `CEREBRAS_API_KEY`  | `llama-3.3-70b`                                      |
| **Groq**          | ✅ Stable  | `GROQ_API_KEY`      | `llama-3.3-70b-versatile`                            |
| **Mistral**       | 🚧 Planned | `MISTRAL_API_KEY`   | TBD                                                  |
| **Cohere**        | 🚧 Planned | `COHERE_API_KEY`    | TBD                                                  |

### Custom Endpoints

ZeroClaw supports OpenAI-compatible endpoints:

```toml
[providers.custom]
enabled = true
api_key = "..."
base_url = "https://api.your-llm-provider.com/v1"
model = "your-model-name"
```

Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to reach any LLM through the OpenAI interface.

See the [providers reference](docs/providers-reference.md) for full configuration details.
## Channel Support

| Channel      | Status     | Auth                   | Notes                                                    |
| ------------ | ---------- | ---------------------- | -------------------------------------------------------- |
| **Telegram** | ✅ Stable  | Bot token              | Full support including files, images, and inline buttons  |
| **Matrix**   | ✅ Stable  | Password or token      | E2EE support with device verification                     |
| **Slack**    | 🚧 Planned | OAuth or bot token     | Requires workspace access                                 |
| **Discord**  | 🚧 Planned | Bot token              | Requires guild permissions                                |
| **WhatsApp** | 🚧 Planned | Twilio or official API | Requires a business account                               |
| **CLI**      | ✅ Stable  | None                   | Direct conversational interface                           |
| **Web**      | 🚧 Planned | API key or OAuth       | Browser-based chat interface                              |

See the [channels reference](docs/channels-reference.md) for full configuration instructions.
## Tool Support

ZeroClaw ships built-in tools for code execution, filesystem access, and web retrieval:

| Tool                 | Description             | Required Runtime               |
| -------------------- | ----------------------- | ------------------------------ |
| **bash**             | Executes shell commands | Native or Docker               |
| **python**           | Executes Python scripts | Python 3.8+ (native) or Docker |
| **javascript**       | Executes Node.js code   | Node.js 18+ (native) or Docker |
| **filesystem_read**  | Reads files             | Native or Docker               |
| **filesystem_write** | Writes files            | Native or Docker               |
| **web_fetch**        | Fetches web content     | Native or Docker               |

### Execution Security

- **Native runtime** — runs as the daemon's user process, full filesystem access
- **Docker runtime** — full container isolation, separate filesystems and networks

Configure the execution policy in `config.toml`:

```toml
[runtime]
kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # explicit allowlist
```

See the [configuration reference](docs/config-reference.md#runtime) for full security options.
## Deployment

### Local Deployment (Development)

```bash
zeroclaw daemon start
zeroclaw agent start
```

### Server Deployment (Production)

Use systemd to manage the daemon and agent as services:

```bash
# Install the binary
cargo install --path . --locked

# Configure the workspace
zeroclaw init

# Install the systemd service files
sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/

# Enable and start the services
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
sudo systemctl start zeroclaw-daemon zeroclaw-agent

# Verify status
sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```

See the [network deployment guide](docs/network-deployment.md) for full production deployment instructions.

### Docker

```bash
# Build the image
docker build -t zeroclaw:latest .

# Run the container
docker run -d \
  --name zeroclaw \
  -v ~/.zeroclaw/workspace:/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  zeroclaw:latest
```

See the [`Dockerfile`](Dockerfile) for build details and configuration options.

### Edge Devices

ZeroClaw is designed to run on low-power hardware:

- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, very low cost
- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support

See the [hardware guide](docs/hardware/README.md) for device-specific setup instructions.
## Tunnels (Public Exposure)

Expose your local ZeroClaw daemon to the public network through secure tunnels:

```bash
zeroclaw tunnel start --provider cloudflare
```

Supported tunnel providers:

- **Cloudflare Tunnel** — free HTTPS, no exposed ports, multi-domain support
- **Ngrok** — quick setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port

See the [configuration reference](docs/config-reference.md#tunnel) for full configuration options.
## Security

ZeroClaw implements multiple layers of security:

### Pairing

The daemon generates a pairing secret on first run, stored in `~/.zeroclaw/workspace/.pairing`. Clients (the agent, the CLI) must present this secret to connect.

```bash
zeroclaw pairing rotate # generates a new secret and invalidates the old one
```

### Sandboxing

- **Docker runtime** — full container isolation with separate filesystems and networks
- **Native runtime** — runs as a user process, scoped to the workspace by default

### Allowlists

Channels can restrict access by user ID:

```toml
[channels.telegram]
enabled = true
allowed_users = [123456789, 987654321] # explicit allowlist
```

### Encryption

- **Matrix E2EE** — full end-to-end encryption with device verification
- **TLS transport** — all API and tunnel traffic uses HTTPS/TLS

See the [security documentation](docs/security/README.md) for full policies and practices.
## Observability

ZeroClaw logs to `~/.zeroclaw/workspace/logs/` by default. Logs are stored per component:

```
~/.zeroclaw/workspace/logs/
├── daemon.log     # Daemon logs (startup, API requests, errors)
├── agent.log      # Agent logs (message routing, tool execution)
├── telegram.log   # Channel-specific logs (if enabled)
└── matrix.log     # Channel-specific logs (if enabled)
```

### Logging Configuration

```toml
[logging]
level = "info"       # debug, info, warn, error
path = "~/.zeroclaw/workspace/logs/"
rotation = "daily"   # daily, hourly, size
max_size_mb = 100    # for size-based rotation
retention_days = 30  # auto-delete after N days
```

See the [configuration reference](docs/config-reference.md#logging) for all logging options.

### Metrics (Planned)

Prometheus metrics support for production monitoring is coming soon. Tracked in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
## Skills

ZeroClaw supports custom skills — reusable modules that extend the system's capabilities.

### Skill Definition

Skills live in `~/.zeroclaw/workspace/skills/<skill-name>/` with this structure:

```
skills/
└── my-skill/
    ├── skill.toml   # Skill metadata (name, description, dependencies)
    ├── prompt.md    # System prompt for the AI
    └── tools/       # Optional custom tools
        └── my_tool.py
```

### Example Skill

```toml
# skills/web-research/skill.toml
[skill]
name = "web-research"
description = "Searches the web and summarizes the results"
version = "1.0.0"

[dependencies]
tools = ["web_fetch", "bash"]
```

```markdown
<!-- skills/web-research/prompt.md -->

You are a research assistant. When asked to research something:

1. Use web_fetch to retrieve content
2. Summarize the findings in an easy-to-read format
3. Cite sources with their URLs
```

### Using Skills

Skills are loaded automatically when the agent starts. Reference them by name in conversation:

```
User: Use the web-research skill to find recent AI news
Bot: [loads the web-research skill, runs web_fetch, summarizes the results]
```

See the [Skills](#skills) section for full skill-creation instructions.
## Open Skills

ZeroClaw supports [Open Skills](https://github.com/openagents-com/open-skills) — a modular, vendor-neutral system for extending AI agent capabilities.

### Enabling Open Skills

```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # optional
```

You can also override these at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`.
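The env-over-config precedence can be sketched as follows. The variable name comes from the docs above, but the parsing logic is an assumption for illustration, not ZeroClaw's actual config-loading code:

```rust
// Sketch of env-over-config resolution (assumed logic, not ZeroClaw's code).
// The environment value, when present, takes precedence over the config file.
fn open_skills_enabled(env_value: Option<&str>, config_default: bool) -> bool {
    match env_value {
        Some(v) => matches!(v, "1" | "true" | "yes"),
        None => config_default,
    }
}

fn main() {
    // Read ZEROCLAW_OPEN_SKILLS_ENABLED from the environment if set.
    let env = std::env::var("ZEROCLAW_OPEN_SKILLS_ENABLED").ok();
    let _enabled = open_skills_enabled(env.as_deref(), false);

    assert!(open_skills_enabled(Some("true"), false)); // env var overrides config
    assert!(!open_skills_enabled(None, false));        // falls back to config value
}
```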
## Development

```bash
cargo build                          # development build
cargo build --release                # release build (codegen-units=1, works on all devices including Raspberry Pi)
cargo build --profile release-fast   # faster build (codegen-units=8, requires 16 GB+ RAM)
cargo test                           # run the full test suite
cargo clippy --locked --all-targets -- -D clippy::correctness
cargo fmt                            # format

# Run the SQLite vs. Markdown comparison benchmark
cargo test --test memory_comparison -- --nocapture
```

### Pre-Push Hook

A git hook runs `cargo fmt --check`, `cargo clippy -- -D warnings`, and `cargo test` before every push. Enable it once:

```bash
git config core.hooksPath .githooks
```

To skip the hook when you need a quick push during development:

```bash
git push --no-verify
```

### Build Troubleshooting (OpenSSL errors on Linux)

If you hit an `openssl-sys` build error, sync dependencies and recompile against the repository lockfile:

```bash
git pull
cargo build --release --locked
cargo install --path . --force --locked
```

ZeroClaw is configured to use `rustls` for its HTTP/TLS dependencies; `--locked` keeps the transitive dependency graph deterministic on clean environments.
## Collaboration and Documentation

Start with the documentation hub for a task-based map:

- Documentation hub: [`docs/README.md`](docs/README.md)
- Unified documentation index: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Commands reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Configuration reference: [`docs/config-reference.md`](docs/config-reference.md)
- Providers reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channels reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Documentation inventory/classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/issue triage snapshot (as of February 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)

Key collaboration references:

- Documentation hub: [docs/README.md](docs/README.md)
- Documentation template: [docs/doc-template.md](docs/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel configuration reference: [docs/channels-reference.md](docs/channels-reference.md)
- Encrypted Matrix room operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Contributing guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- CI ownership map and triage: [docs/ci-map.md](docs/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)

For deployment and runtime operations:

- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy agent playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
## Support ZeroClaw

If ZeroClaw helps your work and you would like to support its ongoing development, you can donate here:

<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>

### 🙏 Special Thanks

Heartfelt thanks to the communities and institutions that inspire and fuel this open-source work:

- **Harvard University** — for fostering intellectual curiosity and pushing the boundaries of what is possible.
- **MIT** — for championing open knowledge, open source, and the belief that technology should be accessible to everyone.
- **Sundai Club** — for the community, the energy, and the relentless drive to build things that matter.
- **The world and beyond** 🌍✨ — for every contributor, dreamer, and builder out there making open source a force for good. This one is for you.

We build in the open because the best ideas come from everywhere. If you are reading this, you are part of it. Welcome. 🦀❤️
## ⚠️ Official Repository and Impersonation Warning

**This is the only official ZeroClaw repository:**

> <https://github.com/zeroclaw-labs/zeroclaw>

Any other repository, organization, domain, or package that claims to be "ZeroClaw" or implies affiliation with ZeroClaw Labs is **unauthorized and unaffiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).

If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
---

## License

ZeroClaw is dual-licensed for maximum openness and contributor protection:

| License                      | Use Cases                                                |
| ---------------------------- | -------------------------------------------------------- |
| [MIT](LICENSE-MIT)           | Open source, research, academic, and personal use        |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, enterprise, and commercial deployment |

You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.

### Trademark

The **ZeroClaw** name and logo are registered trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.

### Contributor Protections

- You **retain copyright** on your contributions
- The **patent grant** (Apache 2.0) protects you from patent claims by other contributors
- Your contributions are **permanently attributed** in the commit history and in [NOTICE](NOTICE)
- Trademark rights are not transferred by contributing
## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:

- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
- New `Observer` → `src/observability/`
- New `Tool` → `src/tools/`
- New `Memory` → `src/memory/`
- New `Tunnel` → `src/tunnel/`
- New `Skill` → `~/.zeroclaw/workspace/skills/<n>/`
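As a hedged sketch of what "implement a trait, submit a PR" might look like for a new `Tool`, here is a toy implementation. The trait shape below is assumed for illustration; the actual trait in `src/tools/` may differ:

```rust
// Hypothetical shape of a new Tool implementation; the actual trait
// in src/tools/ may differ. Illustration only.

trait Tool {
    fn name(&self) -> &str;
    fn run(&self, input: &str) -> Result<String, String>;
}

/// A toy tool that reverses its input string.
struct ReverseTool;

impl Tool for ReverseTool {
    fn name(&self) -> &str {
        "reverse"
    }

    fn run(&self, input: &str) -> Result<String, String> {
        Ok(input.chars().rev().collect())
    }
}

fn main() {
    let tool = ReverseTool;
    assert_eq!(tool.name(), "reverse");
    assert_eq!(tool.run("claw").unwrap(), "walc");
}
```

A real contribution would register the new type with the orchestrator so it becomes available through `allowed_tools` in `config.toml`.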
---

**ZeroClaw** — Zero overhead. Zero compromises. Deploy anywhere. Swap anything. 🦀

## Star History

<p align="center">
<a href="https://www.star-history.com/#zeroclaw-labs/zeroclaw&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
</picture>
</a>
</p>
+179
@@ -0,0 +1,179 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
<strong>Zero overhead. Zero compromises. 100% Rust. 100% Agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>
<p align="center">
🌐 <strong>Languages:</strong>
<a href="README.md">🇺🇸 English</a> ·
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
<a href="README.ja.md">🇯🇵 日本語</a> ·
<a href="README.ko.md">🇰🇷 한국어</a> ·
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
<a href="README.es.md">🇪🇸 Español</a> ·
<a href="README.pt.md">🇧🇷 Português</a> ·
<a href="README.it.md">🇮🇹 Italiano</a> ·
<a href="README.de.md">🇩🇪 Deutsch</a> ·
<a href="README.fr.md">🇫🇷 Français</a> ·
<a href="README.ar.md">🇸🇦 العربية</a> ·
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
<a href="README.ru.md">🇷🇺 Русский</a> ·
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
<a href="README.he.md">🇮🇱 עברית</a> ·
<a href="README.pl.md">🇵🇱 Polski</a> ·
<a href="README.cs.md">🇨🇿 Čeština</a> ·
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
<a href="README.uk.md">🇺🇦 Українська</a> ·
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
<a href="README.th.md">🇹🇭 ไทย</a> ·
<a href="README.ur.md">🇵🇰 اردو</a> ·
<a href="README.ro.md">🇷🇴 Română</a> ·
<a href="README.sv.md">🇸🇪 Svenska</a> ·
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
<a href="README.hu.md">🇭🇺 Magyar</a> ·
<a href="README.fi.md">🇫🇮 Suomi</a> ·
<a href="README.da.md">🇩🇰 Dansk</a> ·
<a href="README.nb.md">🇳🇴 Norsk</a>
</p>

---
## What is ZeroClaw?

ZeroClaw is a lightweight, modular, and extensible AI assistant infrastructure built in Rust. It connects multiple LLM providers (Anthropic, OpenAI, Google, Ollama, and more) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, and more).

### Key Features

- **🦀 Written in Rust**: high performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: supports OpenAI, Anthropic, Google Gemini, Ollama, and more
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: easily add custom tools
- **🔒 Security-first**: reverse-proxy, privacy-first design

---
## Quick Start

### Requirements

- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)

### Installation

```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```

### With Docker

```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```

---
## Configuration

ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.

```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```

---
## Documentation

For detailed documentation, see:

- [Documentation hub](docs/README.md)
- [Commands reference](docs/commands-reference.md)
- [Providers reference](docs/providers-reference.md)
- [Channels reference](docs/channels-reference.md)
- [Configuration reference](docs/config-reference.md)

---

## Contributing

Contributions are welcome! Please read the [contributing guide](CONTRIBUTING.md).

---
## License

This project is dual-licensed:

- MIT License
- Apache License, Version 2.0

See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.

---

## Community

- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)

---
## Sponsor

If ZeroClaw is useful to you, please consider buying us a coffee:

[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)

+914
@@ -0,0 +1,914 @@
<p align="center">
<img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
<strong>Zero overhead. Zero compromises. 100% Rust. 100% Agnostic.</strong><br>
⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
<a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
<a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
<a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
<a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
<a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
<a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
<a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
<a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
Built by students and community members from Harvard, MIT, and Sundai.Club.
</p>
<p align="center">
  🌐 <strong>Languages:</strong> <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

<p align="center">
  <a href="#quick-start">Quick Start</a> |
  <a href="bootstrap.sh">One-Click Setup</a> |
  <a href="docs/README.md">Documentation Hub</a> |
  <a href="docs/SUMMARY.md">Docs Table of Contents</a>
</p>

<p align="center">
  <strong>Quick access:</strong>
  <a href="docs/reference/README.md">Reference</a> ·
  <a href="docs/operations/README.md">Operations</a> ·
  <a href="docs/troubleshooting.md">Troubleshooting</a> ·
  <a href="docs/security/README.md">Security</a> ·
  <a href="docs/hardware/README.md">Hardware</a> ·
  <a href="docs/contributing/README.md">Contributing</a>
</p>

<p align="center">
  <strong>Fast, lightweight, fully autonomous AI assistant infrastructure</strong><br />
  Deploy anywhere. Swap anything.
</p>

<p align="center">
  ZeroClaw is an <strong>operating system runtime</strong> for agent workflows: infrastructure that abstracts models, tools, memory, and execution so you can build agents once and run them anywhere.
</p>

<p align="center"><code>Trait-based architecture · secure runtime by default · swappable provider/channel/tool · everything is pluggable</code></p>

### 📢 Announcements

This table carries important announcements (compatibility changes, security advisories, maintenance windows, and version pinning).

| Date (UTC) | Level | Announcement | Action |
| ---------- | ----- | ------------ | ------ |
| 2026-02-19 | _Critical_ | **We are not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository impersonates our official site/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience while you waited. We are still detecting impersonation attempts: do not take part in any investment or fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its authentication and credential-use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the Consumer Terms of Service. | Please avoid Claude Code OAuth integrations for now to prevent potential losses. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |

### ✨ Features

- 🏎️ **Lightweight runtime by default:** common CLI workflows and status commands run within a few megabytes of memory in production builds.
- 💰 **Cost-efficient deployment:** designed for low-cost boards and small cloud instances, with no heavy runtime dependencies.
- ⚡ **Fast cold starts:** the single-binary Rust runtime keeps command and daemon startup nearly instant for day-to-day operations.
- 🌍 **Portable architecture:** single-binary workflow on ARM, x86, and RISC-V with swappable provider/channel/tool.

### Why teams choose ZeroClaw

- **Lightweight by default:** small Rust binary, fast startup, low memory footprint.
- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scoping.
- **Fully swappable:** the core systems are traits (providers, channels, tools, memory, tunnels).
- **No vendor lock-in:** OpenAI-compatible provider support plus pluggable custom endpoints.

## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproducible)

Quick benchmark on a local machine (macOS arm64, February 2026), normalized to 0.8 GHz edge hardware.

|                            | OpenClaw      | NanoBot        | PicoClaw        | ZeroClaw 🦀          |
| -------------------------- | ------------- | -------------- | --------------- | -------------------- |
| **Language**               | TypeScript    | Python         | Go              | **Rust**             |
| **RAM**                    | > 1 GB        | > 100 MB       | < 10 MB         | **< 5 MB**           |
| **Startup (0.8 GHz core)** | > 500s        | > 30s          | < 1s            | **< 10ms**           |
| **Binary Size**            | ~28 MB (dist) | N/A (scripts)  | ~8 MB           | **3.4 MB**           |
| **Cost**                   | $599 Mac mini | ~$50 Linux SBC | $10 Linux board | **Any $10 hardware** |

> Notes: ZeroClaw results are measured on production builds with `/usr/bin/time -l`. OpenClaw requires a Node.js runtime (typically ~390 MB of additional memory overhead), while NanoBot requires a Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.

<p align="center">
  <img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw comparison" width="800" />
</p>

### Reproducible local measurement

Benchmark claims can drift as the code and toolchains evolve, so always measure your current build locally:

```bash
cargo build --release
ls -lh target/release/zeroclaw

/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status
```

Sample run (macOS arm64, measured February 18, 2026):

- Release binary size: `8.8M`
- `zeroclaw --help`: real time about `0.02s`, peak memory footprint ~`3.9 MB`
- `zeroclaw status`: real time about `0.01s`, peak memory footprint ~`4.1 MB`

## Prerequisites

<details>
<summary><strong>Windows</strong></summary>

### Windows — Required

1. **Visual Studio Build Tools** (provides the MSVC linker and Windows SDK):

   ```powershell
   winget install Microsoft.VisualStudio.2022.BuildTools
   ```

   During installation (or via the Visual Studio Installer), select the **"Desktop development with C++"** workload.

2. **Rust toolchain:**

   ```powershell
   winget install Rustlang.Rustup
   ```

   After installation, open a new terminal and run `rustup default stable` to ensure the stable toolchain is active.

3. **Verify** that both work:

   ```powershell
   rustc --version
   cargo --version
   ```

### Windows — Optional

- **Docker Desktop** — required only if you use the [Docker sandboxed runtime](#current-runtime-support) (`runtime.kind = "docker"`). Install via `winget install Docker.DockerDesktop`.

</details>

<details>
<summary><strong>Linux / macOS</strong></summary>

### Linux / macOS — Required

1. **Essential build tools:**
   - **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
   - **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
   - **macOS:** install the Xcode Command Line Tools: `xcode-select --install`

2. **Rust toolchain:**

   ```bash
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
   ```

   See [rustup.rs](https://rustup.rs) for details.

3. **Verify:**

   ```bash
   rustc --version
   cargo --version
   ```

### Linux / macOS — Optional

- **Docker** — required only if you use the [Docker sandboxed runtime](#current-runtime-support) (`runtime.kind = "docker"`).
  - **Linux (Debian/Ubuntu):** see [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
  - **Linux (Fedora/RHEL):** see [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
  - **macOS:** install Docker Desktop from [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)

</details>

## Quick Start

### Option 1: Automated setup (recommended)

The `bootstrap.sh` script installs Rust, clones ZeroClaw, builds it, and sets up your initial development environment:

```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/bootstrap.sh | bash
```

This will:

1. Install Rust (if missing)
2. Clone the ZeroClaw repository
3. Build ZeroClaw in release mode
4. Install `zeroclaw` into `~/.cargo/bin/`
5. Create the default workspace structure in `~/.zeroclaw/workspace/`
6. Generate the initial configuration file `~/.zeroclaw/workspace/config.toml`

After bootstrapping, reload your shell or run `source ~/.cargo/env` to use the `zeroclaw` command globally.

### Option 2: Manual installation

<details>
<summary><strong>Click to see the manual installation steps</strong></summary>

```bash
# 1. Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# 2. Build in release mode
cargo build --release --locked

# 3. Install the binary
cargo install --path . --locked

# 4. Initialize the workspace
zeroclaw init

# 5. Verify the installation
zeroclaw --version
zeroclaw status
```

</details>

### After installation

Once installed (via bootstrap or manually), you should see:

```
~/.zeroclaw/workspace/
├── config.toml     # Main configuration
├── .pairing        # Pairing secret (generated on first run)
├── logs/           # Daemon/agent logs
├── skills/         # Custom skills
└── memory/         # Conversation context storage
```

**Next steps:**

1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
2. See the [configuration reference](docs/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
4. Test it over your preferred channel (see the [channels reference](docs/channels-reference.md))

## Configuration

Edit `~/.zeroclaw/workspace/config.toml` to configure providers, channels, and system behavior.

### Quick configuration reference

```toml
[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"

[providers.openai]
api_key = "sk-..."
model = "gpt-4o"

[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."

[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@bot:matrix.org"
password = "..."

[memory]
kind = "markdown" # or "sqlite" or "none"

[runtime]
kind = "native" # or "docker" (requires Docker)
```

**Complete reference documents:**

- [Configuration reference](docs/config-reference.md) — all settings, validation, defaults
- [Providers reference](docs/providers-reference.md) — provider-specific AI configuration
- [Channels reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling

### Current Runtime Support

ZeroClaw supports two code-execution backends:

- **`native`** (default) — direct process execution, the fastest path, ideal for trusted environments
- **`docker`** — full container isolation, hardened security policies, requires Docker

Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/config-reference.md#runtime) for full details.

## Commands

```bash
# Workspace management
zeroclaw init             # Initialize a new workspace
zeroclaw status           # Show daemon/agent status
zeroclaw config validate  # Validate config.toml syntax and values

# Daemon management
zeroclaw daemon start     # Start the daemon in the background
zeroclaw daemon stop      # Stop a running daemon
zeroclaw daemon restart   # Restart the daemon (reloads config)
zeroclaw daemon logs      # Show daemon logs

# Agent management
zeroclaw agent start      # Start the agent (requires a running daemon)
zeroclaw agent stop       # Stop the agent
zeroclaw agent restart    # Restart the agent (reloads config)

# Pairing operations
zeroclaw pairing init     # Generate a new pairing secret
zeroclaw pairing rotate   # Rotate the existing pairing secret

# Tunneling (for public exposure)
zeroclaw tunnel start     # Start a tunnel to the local daemon
zeroclaw tunnel stop      # Stop the active tunnel

# Diagnostics
zeroclaw doctor           # Run system health checks
zeroclaw version          # Show version and build information
```

See the [commands reference](docs/commands-reference.md) for complete options and examples.

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                        Channels (trait)                         │
│   Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom      │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                       Agent Orchestrator                        │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐           │
│  │   Message    │  │   Context    │  │     Tool     │           │
│  │   Routing    │  │    Memory    │  │  Execution   │           │
│  └──────────────┘  └──────────────┘  └──────────────┘           │
└─────────────────────────┬───────────────────────────────────────┘
                          │
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│  Providers   │ │    Memory    │ │    Tools     │
│   (trait)    │ │   (trait)    │ │   (trait)    │
├──────────────┤ ├──────────────┤ ├──────────────┤
│  Anthropic   │ │  Markdown    │ │  Filesystem  │
│  OpenAI      │ │  SQLite      │ │  Bash        │
│  Gemini      │ │  None        │ │  Web Fetch   │
│  Ollama      │ │  Custom      │ │  Custom      │
│  Custom      │ └──────────────┘ └──────────────┘
└──────────────┘
        │
        ▼
┌─────────────────────────────────────────────────────────────────┐
│                         Runtime (trait)                         │
│                         Native │ Docker                         │
└─────────────────────────────────────────────────────────────────┘
```

**Key principles:**

- Everything is a **trait** — providers, channels, tools, memory, tunnels
- Channels call the orchestrator; the orchestrator calls providers + tools
- The memory system manages conversation context (Markdown, SQLite, or none)
- The runtime abstracts code execution (native or Docker)
- No vendor lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes

See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.

## Examples

### Telegram Bot

```toml
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
allowed_users = [987654321] # Your Telegram user ID
```

Start the daemon + agent, then message your bot on Telegram:

```
/start
Hi! Could you help me write a Python script?
```

The bot replies with AI-generated code, executes tools when requested, and maintains conversation context.

### Matrix (end-to-end encryption)

```toml
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@zeroclaw:matrix.org"
password = "..."
device_name = "zeroclaw-prod"
e2ee_enabled = true
```

Invite `@zeroclaw:matrix.org` to an encrypted room and the bot replies fully encrypted. See the [Matrix E2EE Guide](docs/matrix-e2ee-guide.md) to set up device verification.

### Multi-Provider

```toml
[providers.anthropic]
enabled = true
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"

[providers.openai]
enabled = true
api_key = "sk-..."
model = "gpt-4o"

[orchestrator]
default_provider = "anthropic"
fallback_providers = ["openai"] # Failover on provider error
```

If Anthropic fails or is rate-limited, the orchestrator automatically falls back to OpenAI.
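
The failover behavior can be pictured with a small sketch (illustrative only, not the ZeroClaw API; the provider functions here are stand-ins):

```python
# Try providers in order and return the first successful completion.
# A provider error (rate limit, outage) triggers fallback to the next one.
def complete(prompt, providers):
    last_err = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:
            last_err = err
    raise last_err  # every provider failed

def anthropic(prompt):          # stand-in for the default provider
    raise RuntimeError("rate limited")

def openai(prompt):             # stand-in for the fallback provider
    return f"openai: {prompt}"

print(complete("hi", [anthropic, openai]))  # → openai: hi
```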

### Custom Memory

```toml
[memory]
kind = "sqlite"
path = "~/.zeroclaw/workspace/memory/conversations.db"
retention_days = 90 # Automatic cleanup after 90 days
```

Or use Markdown for human-readable storage:

```toml
[memory]
kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```

See the [configuration reference](docs/config-reference.md#memory) for all memory options.

## Provider Support

| Provider          | Status      | API Key             | Example Models                                       |
| ----------------- | ----------- | ------------------- | ---------------------------------------------------- |
| **Anthropic**     | ✅ Stable   | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
| **OpenAI**        | ✅ Stable   | `OPENAI_API_KEY`    | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini`             |
| **Google Gemini** | ✅ Stable   | `GOOGLE_API_KEY`    | `gemini-2.0-flash-exp`, `gemini-exp-1206`            |
| **Ollama**        | ✅ Stable   | N/A (local)         | `llama3.3`, `qwen2.5`, `phi4`                        |
| **Cerebras**      | ✅ Stable   | `CEREBRAS_API_KEY`  | `llama-3.3-70b`                                      |
| **Groq**          | ✅ Stable   | `GROQ_API_KEY`      | `llama-3.3-70b-versatile`                            |
| **Mistral**       | 🚧 Planned  | `MISTRAL_API_KEY`   | TBD                                                  |
| **Cohere**        | 🚧 Planned  | `COHERE_API_KEY`    | TBD                                                  |

### Custom Endpoints

ZeroClaw supports OpenAI-compatible endpoints:

```toml
[providers.custom]
enabled = true
api_key = "..."
base_url = "https://api.your-llm-provider.com/v1"
model = "your-model-name"
```

Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to reach any LLM through the OpenAI interface.
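
"OpenAI-compatible" means the endpoint accepts the standard `/chat/completions` request shape. A minimal payload builder (the field names follow the OpenAI Chat Completions API; the model name is the placeholder from the config above):

```python
def build_chat_request(model: str, content: str) -> dict:
    """Build the minimal OpenAI-compatible /chat/completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }

payload = build_chat_request("your-model-name", "Hello!")
# POST this as JSON to <base_url>/chat/completions with an
# "Authorization: Bearer <api_key>" header.
```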

See the [providers reference](docs/providers-reference.md) for complete configuration details.

## Channel Support

| Channel      | Status     | Authentication            | Notes                                                |
| ------------ | ---------- | ------------------------- | ---------------------------------------------------- |
| **Telegram** | ✅ Stable  | Bot token                 | Full support including files, images, inline buttons |
| **Matrix**   | ✅ Stable  | Password or token         | E2EE support with device verification                |
| **Slack**    | 🚧 Planned | OAuth or bot token        | Requires workspace access                            |
| **Discord**  | 🚧 Planned | Bot token                 | Requires guild permissions                           |
| **WhatsApp** | 🚧 Planned | Twilio or the official API | Requires a business account                         |
| **CLI**      | ✅ Stable  | None                      | Direct conversational interface                      |
| **Web**      | 🚧 Planned | API key or OAuth          | Browser-based chat interface                         |

See the [channels reference](docs/channels-reference.md) for complete configuration instructions.

## Tool Support

ZeroClaw ships built-in tools for code execution, filesystem access, and web retrieval:

| Tool                 | Description             | Required Runtime                  |
| -------------------- | ----------------------- | --------------------------------- |
| **bash**             | Executes shell commands | Native or Docker                  |
| **python**           | Executes Python scripts | Python 3.8+ (native) or Docker    |
| **javascript**       | Executes Node.js code   | Node.js 18+ (native) or Docker    |
| **filesystem_read**  | Reads files             | Native or Docker                  |
| **filesystem_write** | Writes files            | Native or Docker                  |
| **web_fetch**        | Fetches web content     | Native or Docker                  |

### Execution Security

- **Native runtime** — runs as the daemon's user process, full filesystem access
- **Docker runtime** — full container isolation, separate filesystems and networks

Configure the execution policy in `config.toml`:

```toml
[runtime]
kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
```

See the [configuration reference](docs/config-reference.md#runtime) for all security options.

## Deployment

### Local Deployment (Development)

```bash
zeroclaw daemon start
zeroclaw agent start
```

### Server Deployment (Production)

Use systemd to manage the daemon and agent as services:

```bash
# Install the binary
cargo install --path . --locked

# Configure the workspace
zeroclaw init

# Install the systemd service files
sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/

# Enable and start the services
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
sudo systemctl start zeroclaw-daemon zeroclaw-agent

# Verify status
sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```

See the [network deployment guide](docs/network-deployment.md) for complete production deployment instructions.

### Docker

```bash
# Build the image
docker build -t zeroclaw:latest .

# Run the container
docker run -d \
  --name zeroclaw \
  -v ~/.zeroclaw/workspace:/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  zeroclaw:latest
```

See the [`Dockerfile`](Dockerfile) for build details and configuration options.

### Edge Hardware

ZeroClaw is designed to run on low-power hardware:

- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, ultra-low cost
- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support

See the [Hardware Guide](docs/hardware/README.md) for device-specific setup instructions.

## Tunneling (Public Exposure)

Expose your local ZeroClaw daemon to the public network through secure tunnels:

```bash
zeroclaw tunnel start --provider cloudflare
```

Supported tunnel providers:

- **Cloudflare Tunnel** — free HTTPS, no port exposure, multi-domain support
- **Ngrok** — quick setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port

See the [configuration reference](docs/config-reference.md#tunnel) for complete configuration options.

## Security

ZeroClaw implements multiple layers of security:

### Pairing

On first run, the daemon generates a pairing secret, stored in `~/.zeroclaw/workspace/.pairing`. Clients (agent, CLI) must present this secret to connect.

```bash
zeroclaw pairing rotate # Generate a new secret and invalidate the old one
```

### Sandboxing

- **Docker runtime** — full container isolation with separate filesystems and networks
- **Native runtime** — runs as a user process, scoped to the workspace by default

### Allowlists

Channels can restrict access by user ID:

```toml
[channels.telegram]
enabled = true
allowed_users = [123456789, 987654321] # Explicit allowlist
```
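
The allowlist semantics are deny-by-default: a sender is accepted only if their ID is listed. As a sketch (illustrative logic, not ZeroClaw source):

```python
# IDs taken from the config snippet above; anyone not listed is rejected.
ALLOWED_USERS = {123456789, 987654321}

def is_allowed(user_id: int) -> bool:
    """Deny-by-default membership check mirroring the allowlist semantics."""
    return user_id in ALLOWED_USERS

assert is_allowed(123456789)
assert not is_allowed(555)  # unknown senders are rejected
```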

### Encryption

- **Matrix E2EE** — full end-to-end encryption with device verification
- **TLS transport** — all API and tunnel traffic uses HTTPS/TLS

See the [security documentation](docs/security/README.md) for complete policies and practices.

## Observability

ZeroClaw logs to `~/.zeroclaw/workspace/logs/` by default. Logs are stored per component:

```
~/.zeroclaw/workspace/logs/
├── daemon.log    # Daemon logs (startup, API requests, errors)
├── agent.log     # Agent logs (message routing, tool execution)
├── telegram.log  # Channel-specific logs (if enabled)
└── matrix.log    # Channel-specific logs (if enabled)
```

### Logging Configuration

```toml
[logging]
level = "info" # debug, info, warn, error
path = "~/.zeroclaw/workspace/logs/"
rotation = "daily" # daily, hourly, size
max_size_mb = 100 # For size-based rotation
retention_days = 30 # Automatic cleanup after N days
```

See the [configuration reference](docs/config-reference.md#logging) for all logging options.

### Metrics (Planned)

Prometheus metrics support for production monitoring is coming soon. Track it in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).

## Skills

ZeroClaw supports custom skills — reusable modules that extend the system's capabilities.

### Skill Definition

Skills live in `~/.zeroclaw/workspace/skills/<skill-name>/` with this structure:

```
skills/
└── my-skill/
    ├── skill.toml # Skill metadata (name, description, dependencies)
    ├── prompt.md  # System prompt for the AI
    └── tools/     # Optional custom tools
        └── my_tool.py
```

### Example Skill

```toml
# skills/web-research/skill.toml
[skill]
name = "web-research"
description = "Searches the web and summarizes the results"
version = "1.0.0"

[dependencies]
tools = ["web_fetch", "bash"]
```

```markdown
<!-- skills/web-research/prompt.md -->

You are a research assistant. When asked to research something:

1. Use web_fetch to retrieve content
2. Summarize the results in an easy-to-read format
3. Cite sources with URLs
```
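
An optional `tools/my_tool.py` from the layout above could be as small as this (hypothetical sketch; the file name comes from the tree earlier, while the function name and loader contract are assumptions, not a documented ZeroClaw interface):

```python
# skills/web-research/tools/my_tool.py (hypothetical)
def summarize(text: str, max_sentences: int = 2) -> str:
    """Toy summarizer: keep only the first few sentences of fetched text."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."
```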

### Using Skills

Skills are loaded automatically when the agent starts. Reference them by name in conversations:

```
User: Use the web-research skill to find the latest AI news
Bot: [loads the web-research skill, runs web_fetch, summarizes the results]
```

See the [Skills](#skills) section for complete skill-creation instructions.

## Open Skills

ZeroClaw supports [Open Skills](https://github.com/openagents-com/open-skills) — a modular, provider-agnostic system for extending AI agent capabilities.

### Enabling Open Skills

```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # optional
```

You can also override these at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`.
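
For example, the overrides can be applied per invocation (the variable names come from the paragraph above; the skills directory path is illustrative):

```bash
ZEROCLAW_OPEN_SKILLS_ENABLED=true \
ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills" \
zeroclaw agent restart
```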

## Development

```bash
cargo build                            # Dev build
cargo build --release                  # Release build (codegen-units=1, works on all devices including Raspberry Pi)
cargo build --profile release-fast     # Faster build (codegen-units=8, needs 16 GB+ RAM)
cargo test                             # Run the full test suite
cargo clippy --locked --all-targets -- -D clippy::correctness
cargo fmt                              # Formatting

# Run the SQLite vs Markdown comparison benchmark
cargo test --test memory_comparison -- --nocapture
```

### Pre-push hook

A Git hook runs `cargo fmt --check`, `cargo clippy -- -D warnings`, and `cargo test` before every push. Enable it once:

```bash
git config core.hooksPath .githooks
```

### Build Troubleshooting (OpenSSL errors on Linux)

If you hit an `openssl-sys` build error, sync dependencies and recompile with the repository's lockfile:

```bash
git pull
cargo build --release --locked
cargo install --path . --force --locked
```

ZeroClaw is configured to use `rustls` for its HTTP/TLS dependencies; `--locked` keeps the transitive dependency graph deterministic in clean environments.

To skip the hook when you need a quick push during development:

```bash
git push --no-verify
```

## Collaboration & Docs

Start with the documentation hub for a task-based map:

- Documentation Hub: [`docs/README.md`](docs/README.md)
- Unified Docs TOC: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Commands Reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Config Reference: [`docs/config-reference.md`](docs/config-reference.md)
- Providers Reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channels Reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations Runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Docs Inventory/Classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/Issue Triage Snapshot (as of February 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)

Core collaboration references:

- Documentation Hub: [docs/README.md](docs/README.md)
- Documentation template: [docs/doc-template.md](docs/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel configuration reference: [docs/channels-reference.md](docs/channels-reference.md)
- Matrix encrypted-room operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Contributing guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer Playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- Ownership map and CI triage: [docs/ci-map.md](docs/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)

For deployment and runtime operations:

- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy Agent Playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)

## Support ZeroClaw

If ZeroClaw helps your work and you would like to support its continued development, you can donate here:

<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>

### 🙏 Special Thanks

Heartfelt thanks to the communities and institutions that inspire and sustain this open-source work:

- **Harvard University** — for nurturing intellectual curiosity and pushing the boundaries of what is possible.
- **MIT** — for championing open knowledge, open source, and the belief that technology should be accessible to everyone.
- **Sundai Club** — for the community, the energy, and the relentless drive to build things that matter.
- **The World & Beyond** 🌍✨ — to every contributor, dreamer, and builder out there making open source a force for good. This is for you.

We build in the open because the best ideas come from anywhere. If you are reading this, you are part of it. Welcome. 🦀❤️

## ⚠️ Official Repository and Impersonation Warning

**This is the only official ZeroClaw repository:**

> <https://github.com/zeroclaw-labs/zeroclaw>

Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and not associated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).

If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).

---

## License

ZeroClaw is dual-licensed for maximum openness and contributor protection:

| License | Use cases |
| ---------------------------- | -------------------------------------------------------- |
| [MIT](LICENSE-MIT) | Open source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployments |

You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.

### Trademark

The **ZeroClaw** name and logo are registered trademarks of ZeroClaw Labs. This license grants no permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.

### Contributor protections

- **You retain copyright** to your contributions
- The **patent grant** (Apache 2.0) protects you from patent claims by other contributors
- Your contributions are **permanently credited** in the commit history and [NOTICE](NOTICE)
- No trademark rights are transferred by contributing
## Příspívání
|
||||
|
||||
Viz [CONTRIBUTING.md](CONTRIBUTING.md) a [CLA.md](CLA.md). Implementujte trait, odešlete PR:
|
||||
|
||||
- Průvodce CI workflow: [docs/ci-map.md](docs/ci-map.md)
|
||||
- Nový `Provider` → `src/providers/`
|
||||
- Nový `Channel` → `src/channels/`
|
||||
- Nový `Observer` → `src/observability/`
|
||||
- Nový `Tool` → `src/tools/`
|
||||
- Nová `Memory` → `src/memory/`
|
||||
- Nový `Tunnel` → `src/tunnel/`
|
||||
- Nová `Skill` → `~/.zeroclaw/workspace/skills/<n>/`
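
The extension points above are real, but their exact signatures live in the source tree; purely as an illustrative sketch, a minimal `Tool`-style implementation could look like this (the trait shape below is an assumption, not ZeroClaw's actual API):

```rust
// Hypothetical shape of a Tool extension point; the real trait lives in
// src/tools/ and its actual signature may differ.
trait Tool {
    fn name(&self) -> &str;
    fn execute(&self, input: &str) -> Result<String, String>;
}

struct Echo;

impl Tool for Echo {
    fn name(&self) -> &str {
        "echo"
    }

    fn execute(&self, input: &str) -> Result<String, String> {
        // A real tool would do work here (fetch a URL, run a command, ...).
        Ok(format!("echo: {input}"))
    }
}

fn main() {
    let tool = Echo;
    println!("{}", tool.execute("hello").unwrap());
}
```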

---

**ZeroClaw** — Zero overhead. Zero compromises. Deploy anywhere. Swap anything. 🦀

## Star History

<p align="center">
  <a href="https://www.star-history.com/#zeroclaw-labs/zeroclaw&type=date&legend=top-left">
    <picture>
      <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&theme=dark&legend=top-left" />
      <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
      <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
    </picture>
  </a>
</p>
+179
@@ -0,0 +1,179 @@

<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
  <strong>Zero Overhead. Zero Compromise. 100% Rust. 100% Agnostic.</strong><br>
  ⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
  <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>

<p align="center">
  🌐 <strong>Languages:</strong>
  <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

---

## What is ZeroClaw?

ZeroClaw is a lightweight, swappable, and extensible AI assistant infrastructure built in Rust. It connects to different LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).

### Key Features

- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Supports OpenAI, Anthropic, Google Gemini, Ollama, and more
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Add custom tools easily
- **🔒 Security first**: Reverse proxy, privacy-first design
---

## Quick Start

### Requirements

- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)

### Installation

```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```

### With Docker

```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```

---

## Configuration

ZeroClaw uses a YAML configuration file. By default it looks for `config.yaml`.

```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```

---

## Documentation

For detailed documentation, see:

- [Documentation Hub](docs/README.md)
- [Commands Reference](docs/commands-reference.md)
- [Providers Reference](docs/providers-reference.md)
- [Channels Reference](docs/channels-reference.md)
- [Config Reference](docs/config-reference.md)

---

## Contributing

Contributions are welcome! Please read the [Contributing Guide](CONTRIBUTING.md).

---

## License

This project is dual-licensed:

- MIT License
- Apache License, Version 2.0

See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.

---

## Community

- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)

---

## Sponsors

If ZeroClaw is useful to you, please consider buying us a coffee:

[](https://buymeacoffee.com/argenistherose)
+918
@@ -0,0 +1,918 @@

<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
  <strong>Zero Overhead. Zero Compromise. 100% Rust. 100% Agnostic.</strong><br>
  ⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
  <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
  <a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>

<p align="center">
  Created by students and members of the Harvard, MIT, and Sundai.Club communities.
</p>

<p align="center">
  🌐 <strong>Languages:</strong>
  <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

<p align="center">
  <a href="#schnellstart">Quick Start</a> |
  <a href="bootstrap.sh">One-Click Setup</a> |
  <a href="docs/README.md">Documentation Hub</a> |
  <a href="docs/SUMMARY.md">Docs Table of Contents</a>
</p>

<p align="center">
  <em>📝 Note: The documentation links point to the English-language documentation. Localized documentation for German is not yet available.</em>
</p>

<p align="center">
  <strong>Quick links:</strong>
  <a href="docs/reference/README.md">Reference</a> ·
  <a href="docs/operations/README.md">Operations</a> ·
  <a href="docs/troubleshooting.md">Troubleshooting</a> ·
  <a href="docs/security/README.md">Security</a> ·
  <a href="docs/hardware/README.md">Hardware</a> ·
  <a href="docs/contributing/README.md">Contributing</a>
</p>

<p align="center">
  <strong>Fast, lightweight, and fully autonomous AI assistant infrastructure</strong><br />
  Deploy anywhere. Swap anything.
</p>

<p align="center">
  ZeroClaw is the <strong>runtime operating system</strong> for agent workflows — infrastructure that abstracts models, tools, memory, and execution so you can build agents once and run them anywhere.
</p>

<p align="center"><code>Trait-based architecture · secure runtime by default · swappable providers/channels/tools · everything is pluggable</code></p>

### 📢 Announcements

This table carries important notices (compatibility changes, security advisories, maintenance windows, and version blocks).

| Date (UTC) | Level | Notice | Action |
| ---------- | ----- | ------ | ------ |
| 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The domain `zeroclaw.org` currently points to the fork `openagen/zeroclaw`, and that domain/repository impersonates our official website/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social media accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience in the meantime. We continue to see impersonation attempts: do not participate in any investment/funding activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its terms for authentication and credential use on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the consumer terms of use. | Please avoid Claude Code OAuth integrations for now to prevent potential losses. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |

### ✨ Features

- 🏎️ **Lightweight runtime by default:** Common CLI workflows and status commands run within a few megabytes of memory on production builds.
- 💰 **Cost-efficient deployment:** Designed for low-cost boards and small cloud instances without heavy runtime dependencies.
- ⚡ **Fast cold starts:** The single-binary Rust runtime keeps command and daemon startups near-instant for day-to-day operations.
- 🌍 **Portable architecture:** One single-binary workflow across ARM, x86, and RISC-V with swappable providers/channels/tools.

### Why teams choose ZeroClaw

- **Lightweight by default:** small Rust binary, fast startup, low memory footprint.
- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scoping.
- **Fully swappable:** core systems are traits (providers, channels, tools, memory, tunnels).
- **No provider lock-in:** OpenAI-compatible provider support + pluggable custom endpoints.

## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproducible)

Quick benchmark on a local machine (macOS arm64, Feb 2026), normalized for 0.8 GHz edge hardware.

| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
| **Language** | TypeScript | Python | Go | **Rust** |
| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
| **Startup (0.8 GHz core)** | > 500s | > 30s | < 1s | **< 10ms** |
| **Binary size** | ~28 MB (dist) | N/A (scripts) | ~8 MB | **3.4 MB** |
| **Cost** | Mac Mini $599 | Linux SBC ~$50 | Linux board $10 | **Any hardware $10** |

> Notes: ZeroClaw figures are measured on production builds with `/usr/bin/time -l`. OpenClaw requires the Node.js runtime (typically ~390 MB of additional memory overhead), while NanoBot requires the Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM numbers above are runtime memory; build-time compilation requirements are higher.

<p align="center">
  <img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw comparison" width="800" />
</p>

### Reproducible Local Measurement

Benchmark claims can drift as the code and toolchains evolve, so always measure your current build locally:

```bash
cargo build --release
ls -lh target/release/zeroclaw

/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status
```

Sample run (macOS arm64, measured February 18, 2026):

- Release binary size: `8.8M`
- `zeroclaw --help`: real time ~`0.02s`, peak memory footprint ~`3.9 MB`
- `zeroclaw status`: real time ~`0.01s`, peak memory footprint ~`4.1 MB`

## Prerequisites

<details>
<summary><strong>Windows</strong></summary>

### Windows — Required

1. **Visual Studio Build Tools** (provides the MSVC linker and Windows SDK):

   ```powershell
   winget install Microsoft.VisualStudio.2022.BuildTools
   ```

   During installation (or via the Visual Studio Installer), select the **"Desktop development with C++"** workload.

2. **Rust toolchain:**

   ```powershell
   winget install Rustlang.Rustup
   ```

   After installation, open a new terminal and run `rustup default stable` to make sure the stable toolchain is active.

3. **Verify** that both work:

   ```powershell
   rustc --version
   cargo --version
   ```

### Windows — Optional

- **Docker Desktop** — only required if you use the [Docker sandbox runtime](#aktuelle-runtime-unterstützung) (`runtime.kind = "docker"`). Install via `winget install Docker.DockerDesktop`.

</details>

<details>
<summary><strong>Linux / macOS</strong></summary>

### Linux / macOS — Required

1. **Essential build tools:**
   - **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
   - **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
   - **macOS:** install the Xcode Command Line Tools: `xcode-select --install`

2. **Rust toolchain:**

   ```bash
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
   ```

   See [rustup.rs](https://rustup.rs) for details.

3. **Verify:**

   ```bash
   rustc --version
   cargo --version
   ```

### Linux / macOS — Optional

- **Docker** — only required if you use the [Docker sandbox runtime](#aktuelle-runtime-unterstützung) (`runtime.kind = "docker"`).
  - **Linux (Debian/Ubuntu):** see [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
  - **Linux (Fedora/RHEL):** see [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
  - **macOS:** install Docker Desktop via [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)

</details>

## Quick Start

### Option 1: Automated Setup (recommended)

The `bootstrap.sh` script installs Rust, clones ZeroClaw, compiles it, and sets up your initial development environment:

```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/bootstrap.sh | bash
```

This will:

1. Install Rust (if not already present)
2. Clone the ZeroClaw repository
3. Compile ZeroClaw in release mode
4. Install `zeroclaw` into `~/.cargo/bin/`
5. Create the default workspace structure in `~/.zeroclaw/workspace/`
6. Generate a starter config file at `~/.zeroclaw/workspace/config.toml`

After bootstrapping, reload your shell or run `source ~/.cargo/env` to use the `zeroclaw` command globally.

### Option 2: Manual Installation

<details>
<summary><strong>Click to see the manual installation steps</strong></summary>

```bash
# 1. Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# 2. Compile in release mode
cargo build --release --locked

# 3. Install the binary
cargo install --path . --locked

# 4. Initialize the workspace
zeroclaw init

# 5. Verify the installation
zeroclaw --version
zeroclaw status
```

</details>

### After Installation

After installing (via bootstrap or manually), you should see:

```
~/.zeroclaw/workspace/
├── config.toml   # Main configuration
├── .pairing      # Pairing secrets (generated on first start)
├── logs/         # Daemon/agent logs
├── skills/       # Custom skills
└── memory/       # Conversation context storage
```

**Next steps:**

1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
2. See the [config reference](docs/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
4. Test via your preferred channel (see the [channels reference](docs/channels-reference.md))

## Configuration

Edit `~/.zeroclaw/workspace/config.toml` to configure providers, channels, and system behavior.

### Quick Config Reference

```toml
[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"

[providers.openai]
api_key = "sk-..."
model = "gpt-4o"

[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."

[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@bot:matrix.org"
password = "..."

[memory]
kind = "markdown" # or "sqlite" or "none"

[runtime]
kind = "native" # or "docker" (requires Docker)
```

**Full reference docs:**

- [Config reference](docs/config-reference.md) — all settings, validation, defaults
- [Providers reference](docs/providers-reference.md) — AI-provider-specific configuration
- [Channels reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling

### Aktuelle Runtime-Unterstützung
|
||||
|
||||
ZeroClaw unterstützt zwei Code-Ausführungs-Backends:
|
||||
|
||||
- **`native`** (Standard) — direkte Prozessausführung, schnellster Pfad, ideal für vertrauenswürdige Umgebungen
|
||||
- **`docker`** — vollständige Container-Isolierung, gehärtete Sicherheitsrichtlinien, erfordert Docker
|
||||
|
||||
Verwende `runtime.kind = "docker"`, wenn du striktes Sandboxing oder Netzwerkisolierung benötigst. Siehe [Konfigurationsreferenz](docs/config-reference.md#runtime) für vollständige Details.

## Commands

```bash
# Workspace management
zeroclaw init             # Initialize a new workspace
zeroclaw status           # Show daemon/agent status
zeroclaw config validate  # Check config.toml syntax and values

# Daemon management
zeroclaw daemon start     # Start the daemon in the background
zeroclaw daemon stop      # Stop the running daemon
zeroclaw daemon restart   # Restart the daemon (reloads config)
zeroclaw daemon logs      # Show daemon logs

# Agent management
zeroclaw agent start      # Start the agent (requires a running daemon)
zeroclaw agent stop       # Stop the agent
zeroclaw agent restart    # Restart the agent (reloads config)

# Pairing operations
zeroclaw pairing init     # Generate a new pairing secret
zeroclaw pairing rotate   # Rotate the existing pairing secret

# Tunneling (for public exposure)
zeroclaw tunnel start     # Start a tunnel to the local daemon
zeroclaw tunnel stop      # Stop the active tunnel

# Diagnostics
zeroclaw doctor           # Run system health checks
zeroclaw version          # Show version and build information
```

See the [command reference](docs/commands-reference.md) for full options and examples.

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                        Channels (Trait)                         │
│    Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom     │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                       Agent Orchestrator                        │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐           │
│  │   Message    │  │   Context    │  │     Tool     │           │
│  │   Routing    │  │    Memory    │  │  Execution   │           │
│  └──────────────┘  └──────────────┘  └──────────────┘           │
└─────────────────────────┬───────────────────────────────────────┘
                          │
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐
  │  Providers   │  │    Memory    │  │    Tools     │
  │   (Trait)    │  │   (Trait)    │  │   (Trait)    │
  ├──────────────┤  ├──────────────┤  ├──────────────┤
  │ Anthropic    │  │ Markdown     │  │ Filesystem   │
  │ OpenAI       │  │ SQLite       │  │ Bash         │
  │ Gemini       │  │ None         │  │ Web Fetch    │
  │ Ollama       │  │ Custom       │  │ Custom       │
  │ Custom       │  └──────────────┘  └──────────────┘
  └──────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                         Runtime (Trait)                         │
│                         Native │ Docker                         │
└─────────────────────────────────────────────────────────────────┘
```

**Key principles:**

- Everything is a **trait** — providers, channels, tools, memory, tunnels
- Channels call the orchestrator; the orchestrator calls providers + tools
- The memory system manages conversation context (Markdown, SQLite, or none)
- The runtime abstracts code execution (native or Docker)
- No provider lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes

See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
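
The "everything is a trait" principle means orchestrator code depends only on small interfaces, so backends swap freely. A language-neutral sketch of the idea (illustrative names only, not ZeroClaw's actual trait definitions):

```python
from typing import Protocol


class Provider(Protocol):
    """Minimal provider interface: one completion call (illustrative)."""

    def complete(self, prompt: str) -> str: ...


class Anthropic:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"


class Ollama:
    def complete(self, prompt: str) -> str:
        return f"[ollama] {prompt}"


def run(provider: Provider, prompt: str) -> str:
    # Orchestrator code sees only the interface, never a concrete backend.
    return provider.complete(prompt)


print(run(Anthropic(), "hi"))  # [anthropic] hi
print(run(Ollama(), "hi"))     # [ollama] hi
```

In the Rust implementation the same shape is expressed as a `trait` with one `impl` per backend.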

## Examples

### Telegram bot

```toml
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
allowed_users = [987654321]  # Your Telegram user ID
```

Start the daemon + agent, then message your bot on Telegram:

```
/start
Hello! Could you help me write a Python script?
```

The bot replies with AI-generated code, runs tools on request, and keeps conversation context.

### Matrix (end-to-end encryption)

```toml
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@zeroclaw:matrix.org"
password = "..."
device_name = "zeroclaw-prod"
e2ee_enabled = true
```

Invite `@zeroclaw:matrix.org` into an encrypted room and the bot will reply with full encryption. See the [Matrix E2EE guide](docs/matrix-e2ee-guide.md) for device verification setup.

### Multi-provider

```toml
[providers.anthropic]
enabled = true
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"

[providers.openai]
enabled = true
api_key = "sk-..."
model = "gpt-4o"

[orchestrator]
default_provider = "anthropic"
fallback_providers = ["openai"]  # Failover on provider error
```

If Anthropic fails or is rate-limited, the orchestrator automatically fails over to OpenAI.
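
The failover behavior amounts to trying providers in order and returning the first success. A minimal sketch (the provider callables here are stand-ins, not ZeroClaw's orchestrator code):

```python
def flaky(prompt: str) -> str:
    # Stand-in for a provider that is currently rate-limited.
    raise TimeoutError("rate limited")


def steady(prompt: str) -> str:
    # Stand-in for a healthy fallback provider.
    return f"ok: {prompt}"


def complete_with_failover(providers, prompt):
    """Try providers in order; return (name, reply) from the first success."""
    last_err = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as err:  # e.g. rate limit, timeout, network error
            last_err = err
    raise RuntimeError("all providers failed") from last_err


name, reply = complete_with_failover(
    [("anthropic", flaky), ("openai", steady)], "hello"
)
print(name, reply)  # openai ok: hello
```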

### Custom memory

```toml
[memory]
kind = "sqlite"
path = "~/.zeroclaw/workspace/memory/conversations.db"
retention_days = 90  # Automatic cleanup after 90 days
```

Or use Markdown for human-readable memory:

```toml
[memory]
kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```

See the [configuration reference](docs/config-reference.md#memory) for all memory options.
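
Conceptually, `retention_days` maps to a periodic delete of rows older than a cutoff. A sketch with Python's stdlib `sqlite3` (the schema is hypothetical, not ZeroClaw's actual table layout):

```python
import sqlite3
import time

RETENTION_DAYS = 90  # mirrors retention_days above

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (content TEXT, created_at REAL)")
now = time.time()
db.execute("INSERT INTO messages VALUES ('old', ?)", (now - 120 * 86400,))
db.execute("INSERT INTO messages VALUES ('fresh', ?)", (now,))

# Automatic cleanup: drop everything older than the retention window.
cutoff = now - RETENTION_DAYS * 86400
db.execute("DELETE FROM messages WHERE created_at < ?", (cutoff,))

remaining = [row[0] for row in db.execute("SELECT content FROM messages")]
print(remaining)  # ['fresh']
```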

## Provider support

| Provider          | Status     | API key             | Example models                                       |
| ----------------- | ---------- | ------------------- | ---------------------------------------------------- |
| **Anthropic**     | ✅ Stable  | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
| **OpenAI**        | ✅ Stable  | `OPENAI_API_KEY`    | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini`             |
| **Google Gemini** | ✅ Stable  | `GOOGLE_API_KEY`    | `gemini-2.0-flash-exp`, `gemini-exp-1206`            |
| **Ollama**        | ✅ Stable  | N/A (local)         | `llama3.3`, `qwen2.5`, `phi4`                        |
| **Cerebras**      | ✅ Stable  | `CEREBRAS_API_KEY`  | `llama-3.3-70b`                                      |
| **Groq**          | ✅ Stable  | `GROQ_API_KEY`      | `llama-3.3-70b-versatile`                            |
| **Mistral**       | 🚧 Planned | `MISTRAL_API_KEY`   | TBD                                                  |
| **Cohere**        | 🚧 Planned | `COHERE_API_KEY`    | TBD                                                  |

### Custom endpoints

ZeroClaw supports OpenAI-compatible endpoints:

```toml
[providers.custom]
enabled = true
api_key = "..."
base_url = "https://api.your-llm-provider.com/v1"
model = "your-model-name"
```

For example, use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to reach any LLM through the OpenAI interface.

See the [provider reference](docs/providers-reference.md) for full configuration details.
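
"OpenAI-compatible" means the provider accepts the Chat Completions wire format under `base_url`. Roughly, the request that gets sent looks like this (a sketch of the request shape only, using the placeholder values from the TOML above; nothing is sent over the network):

```python
import json

# Values taken from the [providers.custom] block above.
base_url = "https://api.your-llm-provider.com/v1"
url = f"{base_url}/chat/completions"

headers = {
    "Authorization": "Bearer <api_key>",  # the configured api_key
    "Content-Type": "application/json",
}
payload = {
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)
print(url)  # https://api.your-llm-provider.com/v1/chat/completions
```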

## Channel support

| Channel      | Status     | Authentication         | Notes                                                |
| ------------ | ---------- | ---------------------- | ---------------------------------------------------- |
| **Telegram** | ✅ Stable  | Bot token              | Full support including files, images, inline buttons |
| **Matrix**   | ✅ Stable  | Password or token      | E2EE support with device verification                |
| **Slack**    | 🚧 Planned | OAuth or bot token     | Requires workspace access                            |
| **Discord**  | 🚧 Planned | Bot token              | Requires guild permissions                           |
| **WhatsApp** | 🚧 Planned | Twilio or official API | Requires a business account                          |
| **CLI**      | ✅ Stable  | None                   | Direct conversational interface                      |
| **Web**      | 🚧 Planned | API key or OAuth       | Browser-based chat interface                         |

See the [channel reference](docs/channels-reference.md) for full configuration guides.

## Tool support

ZeroClaw ships built-in tools for code execution, filesystem access, and web fetching:

| Tool                 | Description         | Required runtime                |
| -------------------- | ------------------- | ------------------------------- |
| **bash**             | Runs shell commands | Native or Docker                |
| **python**           | Runs Python scripts | Python 3.8+ (native) or Docker  |
| **javascript**       | Runs Node.js code   | Node.js 18+ (native) or Docker  |
| **filesystem_read**  | Reads files         | Native or Docker                |
| **filesystem_write** | Writes files        | Native or Docker                |
| **web_fetch**        | Fetches web content | Native or Docker                |

### Execution security

- **Native runtime** — runs as the daemon's user process, full filesystem access
- **Docker runtime** — full container isolation, separate filesystems and networks

Configure the execution policy in `config.toml`:

```toml
[runtime]
kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"]  # Explicit allowlist
```

See the [configuration reference](docs/config-reference.md#runtime) for full security options.

## Deployment

### Local deployment (development)

```bash
zeroclaw daemon start
zeroclaw agent start
```

### Server deployment (production)

Use systemd to manage the daemon and agent as services:

```bash
# Install the binary
cargo install --path . --locked

# Configure the workspace
zeroclaw init

# Create the systemd service files
sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/

# Enable and start the services
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
sudo systemctl start zeroclaw-daemon zeroclaw-agent

# Check status
sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```

See the [network deployment guide](docs/network-deployment.md) for full production deployment instructions.

### Docker

```bash
# Build the image
docker build -t zeroclaw:latest .

# Run the container
docker run -d \
  --name zeroclaw \
  -v ~/.zeroclaw/workspace:/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  zeroclaw:latest
```

See the [`Dockerfile`](Dockerfile) for build details and configuration options.

### Edge hardware

ZeroClaw is designed to run on low-power hardware:

- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, ultra-low-cost
- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support

See the [hardware guide](docs/hardware/README.md) for device-specific setup instructions.

## Tunneling (public exposure)

Expose your local ZeroClaw daemon to the public network through secure tunnels:

```bash
zeroclaw tunnel start --provider cloudflare
```

Supported tunnel providers:

- **Cloudflare Tunnel** — free HTTPS, no port exposure, multi-domain support
- **Ngrok** — quick setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port

See the [configuration reference](docs/config-reference.md#tunnel) for full configuration options.

## Security

ZeroClaw implements several layers of security:

### Pairing

On first start, the daemon generates a pairing secret stored in `~/.zeroclaw/workspace/.pairing`. Clients (agent, CLI) must present this secret to connect.

```bash
zeroclaw pairing rotate  # Generate a new secret and invalidate the old one
```
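
A pairing secret is just high-entropy random material, and rotation means minting a new value and discarding the old one. A sketch with Python's stdlib `secrets` (the length and encoding here are illustrative, not ZeroClaw's actual format):

```python
import secrets


def new_pairing_secret() -> str:
    # 32 random bytes, hex-encoded -> 64 characters.
    return secrets.token_hex(32)


old = new_pairing_secret()
new = new_pairing_secret()  # "rotate" = generate a new secret, drop the old
print(len(new), old != new)  # 64 True
```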

### Sandboxing

- **Docker runtime** — full container isolation with separate filesystems and networks
- **Native runtime** — runs as a user process, restricted to the workspace by default

### Allowlists

Channels can restrict access by user ID:

```toml
[channels.telegram]
enabled = true
allowed_users = [123456789, 987654321]  # Explicit allowlist
```
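
The gate this configures reduces to a membership check before any message is routed. A sketch (not ZeroClaw's channel code):

```python
allowed_users = {123456789, 987654321}  # mirrors the TOML allowlist above


def is_authorized(user_id: int) -> bool:
    # Deny anyone not explicitly listed.
    return user_id in allowed_users


print(is_authorized(123456789), is_authorized(42))  # True False
```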

### Encryption

- **Matrix E2EE** — full end-to-end encryption with device verification
- **TLS transport** — all API and tunnel traffic uses HTTPS/TLS

See the [security documentation](docs/security/README.md) for full policies and practices.

## Observability

By default, ZeroClaw logs to `~/.zeroclaw/workspace/logs/`. Logs are stored per component:

```
~/.zeroclaw/workspace/logs/
├── daemon.log     # Daemon logs (startup, API requests, errors)
├── agent.log      # Agent logs (message routing, tool execution)
├── telegram.log   # Channel-specific logs (if enabled)
└── matrix.log     # Channel-specific logs (if enabled)
```

### Logging configuration

```toml
[logging]
level = "info"       # debug, info, warn, error
path = "~/.zeroclaw/workspace/logs/"
rotation = "daily"   # daily, hourly, size
max_size_mb = 100    # For size-based rotation
retention_days = 30  # Automatic cleanup after N days
```

See the [configuration reference](docs/config-reference.md#logging) for all logging options.
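
Size-based rotation as configured above has a direct stdlib analogue in Python's `logging.handlers.RotatingFileHandler`; this sketch mirrors the settings (it writes to a throwaway directory, and the mapping to ZeroClaw's internals is only an analogy):

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()  # stand-in for ~/.zeroclaw/workspace/logs/
handler = logging.handlers.RotatingFileHandler(
    os.path.join(log_dir, "daemon.log"),
    maxBytes=100 * 1024 * 1024,  # max_size_mb = 100
    backupCount=30,              # loosely mirrors retention_days = 30
)

logger = logging.getLogger("daemon")
logger.setLevel(logging.INFO)    # level = "info"
logger.addHandler(handler)
logger.info("daemon started")
handler.flush()

print(os.path.exists(os.path.join(log_dir, "daemon.log")))  # True
```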

### Metrics (planned)

Prometheus metrics support for production monitoring is coming soon. Tracked in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).

## Skills

ZeroClaw supports user-defined skills — reusable modules that extend the system's capabilities.

### Skill definition

Skills live in `~/.zeroclaw/workspace/skills/<skill-name>/` with this structure:

```
skills/
└── my-skill/
    ├── skill.toml   # Skill metadata (name, description, dependencies)
    ├── prompt.md    # System prompt for the AI
    └── tools/       # Optional custom tools
        └── my_tool.py
```

### Skill example

```toml
# skills/web-research/skill.toml
[skill]
name = "web-research"
description = "Searches the web and summarizes results"
version = "1.0.0"

[dependencies]
tools = ["web_fetch", "bash"]
```

```markdown
<!-- skills/web-research/prompt.md -->

You are a research assistant. When asked to research something:

1. Use web_fetch to retrieve the content
2. Summarize the results in an easy-to-read format
3. Cite the sources with URLs
```

### Using skills

Skills are loaded automatically at agent startup. Reference them by name in conversation:

```
User: Use the web-research skill to find the latest AI news
Bot: [loads the web-research skill, runs web_fetch, summarizes results]
```

See the [Skills](#skills) section for full skill-authoring guides.

## Open Skills

ZeroClaw supports [Open Skills](https://github.com/openagents-com/open-skills) — a modular, provider-agnostic system for extending AI agent capabilities.

### Enabling Open Skills

```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills"  # optional
```

You can also override these at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`.
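
The runtime override means an environment variable, when set, takes precedence over the value in `config.toml`. A sketch of that precedence rule (the config dict is a stand-in, not ZeroClaw's settings object):

```python
import os

# Stand-in for values parsed from [skills] in config.toml.
config = {"open_skills_enabled": True, "open_skills_dir": None}


def effective(key: str, env_var: str):
    # Environment variables win over the file value when set.
    return os.environ.get(env_var, config[key])


os.environ["ZEROCLAW_OPEN_SKILLS_DIR"] = "/tmp/open-skills"
print(effective("open_skills_dir", "ZEROCLAW_OPEN_SKILLS_DIR"))  # /tmp/open-skills
```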

## Development

```bash
cargo build                         # Development build
cargo build --release               # Release build (codegen-units=1, works on all devices including Raspberry Pi)
cargo build --profile release-fast  # Faster build (codegen-units=8, needs 16 GB+ RAM)
cargo test                          # Run the full test suite
cargo clippy --locked --all-targets -- -D clippy::correctness
cargo fmt                           # Formatting

# Run the SQLite vs Markdown comparison benchmark
cargo test --test memory_comparison -- --nocapture
```

### Pre-push hook

A Git hook runs `cargo fmt --check`, `cargo clippy -- -D warnings`, and `cargo test` before every push. Enable it once:

```bash
git config core.hooksPath .githooks
```

### Build troubleshooting (OpenSSL errors on Linux)

If you hit an `openssl-sys` build error, sync dependencies and rebuild against the repository's lockfile:

```bash
git pull
cargo build --release --locked
cargo install --path . --force --locked
```

ZeroClaw is configured to use `rustls` for HTTP/TLS dependencies; `--locked` keeps the transitive dependency graph deterministic in clean environments.

To skip the hook when you need a quick push during development:

```bash
git push --no-verify
```

## Collaboration & docs

Start with the documentation hub for a task-based map:

- Documentation hub: [`docs/README.md`](docs/README.md)
- Unified docs table of contents: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Command reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Configuration reference: [`docs/config-reference.md`](docs/config-reference.md)
- Provider reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channel reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Docs inventory/classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/issue triage snapshot (as of Feb 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)

Main collaboration references:

- Documentation hub: [docs/README.md](docs/README.md)
- Documentation template: [docs/doc-template.md](docs/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel configuration reference: [docs/channels-reference.md](docs/channels-reference.md)
- Matrix encrypted room operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Contributing guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- Ownership and CI triage map: [docs/ci-map.md](docs/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)

For deployment and runtime operations:

- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy agent playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)

## Supporting ZeroClaw

If ZeroClaw helps your work and you'd like to support its continued development, you can donate here:

<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>

### 🙏 Special thanks

A heartfelt thank you to the communities and institutions that inspire and support this open-source work:

- **Harvard University** — for fostering intellectual curiosity and pushing the boundaries of what's possible.
- **MIT** — for championing open knowledge, open source, and the belief that technology should be accessible to everyone.
- **Sundai Club** — for the community, the energy, and the relentless drive to build things that matter.
- **The World and Beyond** 🌍✨ — to every contributor, dreamer, and builder out there making open source a force for good. This one's for you.

We build in the open because the best ideas come from everywhere. If you're reading this, you're part of it. Welcome. 🦀❤️

## ⚠️ Official repository and impersonation warning

**This is the only official ZeroClaw repository:**

> <https://github.com/zeroclaw-labs/zeroclaw>

Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying a connection to ZeroClaw Labs is **unauthorized and unaffiliated with this project**. Known unauthorized forks are listed in [TRADEMARK.md](TRADEMARK.md).

If you encounter impersonation or trademark abuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).

---

## License

ZeroClaw is dual-licensed for maximum openness and contributor protection:

| License                      | Use cases                                               |
| ---------------------------- | ------------------------------------------------------- |
| [MIT](LICENSE-MIT)           | Open source, research, academic, personal use           |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |

You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.

### Trademark

The **ZeroClaw** name and logo are registered trademarks of ZeroClaw Labs. This license grants no permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.

### Contributor protection

- You **retain copyright** of your contributions
- The **patent grant** (Apache 2.0) protects you from patent claims by other contributors
- Your contributions are **permanently attributed** in the commit history and [NOTICE](NOTICE)
- No trademark rights are transferred by contributing

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:

- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
- New `Observer` → `src/observability/`
- New `Tool` → `src/tools/`
- New `Memory` → `src/memory/`
- New `Tunnel` → `src/tunnel/`
- New `Skill` → `~/.zeroclaw/workspace/skills/<n>/`

---

**ZeroClaw** — Zero overhead. Zero compromise. Deploy anywhere. Swap anything. 🦀

## Star history

<p align="center">
  <a href="https://www.star-history.com/#zeroclaw-labs/zeroclaw&type=date&legend=top-left">
    <picture>
      <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&theme=dark&legend=top-left" />
      <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
      <img alt="Star history chart" src="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
    </picture>
  </a>
</p>
<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
  <strong>Zero overhead. Zero compromise. 100% Rust. 100% agnostic.</strong><br>
  ⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>

<p align="center">
  🌐 <strong>Languages:</strong>
  <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

---

> **📝 Note:** This is a condensed README in Greek. For full documentation, refer to the [English README](README.md). Documentation links point to the English docs.

## What is ZeroClaw?

ZeroClaw is a lightweight, portable, and extensible AI assistant infrastructure built in Rust. It connects multiple LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).

### Key features

- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Supports OpenAI, Anthropic, Google Gemini, Ollama, and more
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Customizable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Easily add custom tools
- **🔒 Security first**: Reverse proxy, privacy-oriented design

---

## Quick start

### Requirements

- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)

### Installation

```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```

### With Docker

```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```

---

## Configuration

ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.

```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```
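
The `${VAR}` placeholders in the config are substituted from the environment at load time. A sketch of how that interpolation typically works (not ZeroClaw's actual loader):

```python
import os
import re

raw = "api_key: ${ANTHROPIC_API_KEY}"
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-test"

# Replace each ${VAR} with the value of the environment variable VAR.
expanded = re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), raw)
print(expanded)  # api_key: sk-ant-test
```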

---

## Documentation

For detailed documentation, see:

- [Documentation hub](docs/README.md)
- [Command reference](docs/commands-reference.md)
- [Provider reference](docs/providers-reference.md)
- [Channel reference](docs/channels-reference.md)
- [Configuration reference](docs/config-reference.md)

---

## Contributing

Contributions are welcome! Please read the [contributing guide](CONTRIBUTING.md).

---

## License

This project is dual-licensed:

- MIT License
- Apache License, Version 2.0

See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.

---

## Community

- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)

---

## Sponsors

If ZeroClaw is useful to you, please consider buying us a coffee:

[](https://buymeacoffee.com/argenistherose)

<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
  <strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong><br>
  ⚡️ <strong>Runs on $10 hardware with &lt;5 MB of RAM: 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
  <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
  <a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
</p>
<p align="center">
  Built by students and members of the Harvard, MIT, and Sundai.Club communities.
</p>

<p align="center">
  🌐 <strong>Languages:</strong> <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

<p align="center">
  <a href="#quick-start">Quick Start</a> |
  <a href="bootstrap.sh">One-Click Setup</a> |
  <a href="docs/README.md">Documentation Hub</a> |
  <a href="docs/SUMMARY.md">Documentation Table of Contents</a>
</p>

<p align="center">
  <strong>Shortcuts:</strong>
  <a href="docs/reference/README.md">Reference</a> ·
  <a href="docs/operations/README.md">Operations</a> ·
  <a href="docs/troubleshooting.md">Troubleshooting</a> ·
  <a href="docs/security/README.md">Security</a> ·
  <a href="docs/hardware/README.md">Hardware</a> ·
  <a href="docs/contributing/README.md">Contributing</a>
</p>

<p align="center">
  <strong>Fast, lightweight, fully autonomous AI assistant infrastructure</strong><br />
  Deploy anywhere. Swap anything.
</p>

<p align="center">
  ZeroClaw is the <strong>runtime operating system</strong> for agent workflows: infrastructure that abstracts models, tools, memory, and execution so you can build agents once and run them anywhere.
</p>

<p align="center"><code>Trait-based architecture · secure-by-default runtime · swappable provider/channel/tool · everything is pluggable</code></p>

### 📢 Announcements

Use this table for important notices (compatibility changes, security advisories, maintenance windows, and version pins).

| Date (UTC) | Level | Notice | Action |
| --- | --- | --- | --- |
| 2026-02-19 | _Critical_ | **We are not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and this domain/repository is impersonating our official website/project. | Do not trust information, binaries, fundraising, or announcements from these sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience during the wait. We are still seeing impersonation attempts: do not take part in any investment or fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
| 2026-02-19 | _Important_ | Anthropic updated its authentication and credential-use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the Consumer Terms of Use. | Please avoid Claude Code OAuth integrations for now to prevent any potential loss. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |

### ✨ Features

- 🏎️ **Lightweight Runtime by Default:** Common CLI workflows and status commands run within a few megabytes of memory in production builds.
- 💰 **Low-Cost Deployment:** Designed for cheap boards and small cloud instances, with no heavy runtime dependencies.
- ⚡ **Fast Cold Starts:** The single-binary Rust runtime keeps command and daemon startup near-instant for daily operations.
- 🌍 **Portable Architecture:** One single-binary workflow across ARM, x86, and RISC-V, with swappable provider/channel/tool.

### Why teams choose ZeroClaw

- **Lightweight by default:** small Rust binary, fast startup, low memory footprint.
- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scoping.
- **Fully swappable:** the core systems are traits (providers, channels, tools, memory, tunnels).
- **No vendor lock-in:** OpenAI-compatible provider support plus pluggable custom endpoints.

## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproducible)

Quick benchmark on a local machine (macOS arm64, Feb 2026), normalized for 0.8 GHz edge hardware.

| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
| --- | --- | --- | --- | --- |
| **Language** | TypeScript | Python | Go | **Rust** |
| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
| **Startup (0.8 GHz core)** | > 500 s | > 30 s | < 1 s | **< 10 ms** |
| **Binary Size** | ~28 MB (dist) | N/A (scripts) | ~8 MB | **3.4 MB** |
| **Cost** | $599 Mac mini | ~$50 Linux SBC | $10 Linux board | **Any $10 hardware** |

> Notes: ZeroClaw results are measured on production builds using `/usr/bin/time -l`. OpenClaw requires the Node.js runtime (typically ~390 MB of additional memory overhead), while NanoBot requires the Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.

<p align="center">
  <img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw comparison" width="800" />
</p>

### Reproducible Local Measurement

Benchmark claims can drift as the code and toolchains evolve, so always measure your current build locally:

```bash
cargo build --release
ls -lh target/release/zeroclaw

/usr/bin/time -l target/release/zeroclaw --help
/usr/bin/time -l target/release/zeroclaw status
```

Sample run (macOS arm64, measured February 18, 2026):

- Release binary size: `8.8M`
- `zeroclaw --help`: real time approx. `0.02s`, peak memory footprint ~`3.9 MB`
- `zeroclaw status`: real time approx. `0.01s`, peak memory footprint ~`4.1 MB`
## Prerequisites

<details>
<summary><strong>Windows</strong></summary>

### Windows — Required

1. **Visual Studio Build Tools** (provides the MSVC linker and the Windows SDK):

   ```powershell
   winget install Microsoft.VisualStudio.2022.BuildTools
   ```

   During installation (or via the Visual Studio Installer), select the **"Desktop development with C++"** workload.

2. **Rust toolchain:**

   ```powershell
   winget install Rustlang.Rustup
   ```

   After installation, open a new terminal and run `rustup default stable` to make sure the stable toolchain is active.

3. **Verify** that both work:

   ```powershell
   rustc --version
   cargo --version
   ```

### Windows — Optional

- **Docker Desktop** — required only if you use the [Docker sandboxed runtime](#runtime-support-current) (`runtime.kind = "docker"`). Install via `winget install Docker.DockerDesktop`.

</details>

<details>
<summary><strong>Linux / macOS</strong></summary>

### Linux / macOS — Required

1. **Essential build tools:**
   - **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
   - **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
   - **macOS:** install the Xcode Command Line Tools: `xcode-select --install`

2. **Rust toolchain:**

   ```bash
   curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
   ```

   See [rustup.rs](https://rustup.rs) for details.

3. **Verify:**

   ```bash
   rustc --version
   cargo --version
   ```

### Linux / macOS — Optional

- **Docker** — required only if you use the [Docker sandboxed runtime](#runtime-support-current) (`runtime.kind = "docker"`).
  - **Linux (Debian/Ubuntu):** see [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
  - **Linux (Fedora/RHEL):** see [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
  - **macOS:** install Docker Desktop via [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)

</details>

## Quick Start

### Option 1: Automated setup (recommended)

The `bootstrap.sh` script installs Rust, clones ZeroClaw, builds it, and sets up your initial development environment:

```bash
curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/bootstrap.sh | bash
```

This will:

1. Install Rust (if not already present)
2. Clone the ZeroClaw repository
3. Build ZeroClaw in release mode
4. Install `zeroclaw` into `~/.cargo/bin/`
5. Create the default workspace structure under `~/.zeroclaw/workspace/`
6. Generate an initial configuration file at `~/.zeroclaw/workspace/config.toml`

After bootstrapping, reload your shell or run `source ~/.cargo/env` to use the `zeroclaw` command globally.

### Option 2: Manual installation

<details>
<summary><strong>Click to see the manual installation steps</strong></summary>

```bash
# 1. Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# 2. Build in release mode
cargo build --release --locked

# 3. Install the binary
cargo install --path . --locked

# 4. Initialize the workspace
zeroclaw init

# 5. Verify the installation
zeroclaw --version
zeroclaw status
```

</details>

### After installation

Once installed (via bootstrap or manually), you should see:

```
~/.zeroclaw/workspace/
├── config.toml     # Main configuration
├── .pairing        # Pairing secrets (generated on first start)
├── logs/           # Daemon/agent logs
├── skills/         # Custom skills
└── memory/         # Conversational context storage
```

**Next steps:**

1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
2. Review the [config reference](docs/config-reference.md) for advanced options
3. Start the agent: `zeroclaw agent start`
4. Test via your preferred channel (see the [channels reference](docs/channels-reference.md))
## Configuration

Edit `~/.zeroclaw/workspace/config.toml` to configure providers, channels, and system behavior.

### Quick Configuration Reference

```toml
[providers.anthropic]
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"

[providers.openai]
api_key = "sk-..."
model = "gpt-4o"

[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."

[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@bot:matrix.org"
password = "..."

[memory]
kind = "markdown" # or "sqlite" or "none"

[runtime]
kind = "native" # or "docker" (requires Docker)
```

**Full reference docs:**

- [Config Reference](docs/config-reference.md) — all settings, validations, defaults
- [Providers Reference](docs/providers-reference.md) — AI-provider-specific configuration
- [Channels Reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling

### Runtime Support (current)

ZeroClaw supports two code-execution backends:

- **`native`** (default) — direct process execution, fastest path, best for trusted environments
- **`docker`** — full container isolation, hardened security policies, requires Docker

Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [config reference](docs/config-reference.md#runtime) for full details.
## Commands

```bash
# Workspace management
zeroclaw init               # Initialize a new workspace
zeroclaw status             # Show daemon/agent status
zeroclaw config validate    # Check config.toml syntax and values

# Daemon management
zeroclaw daemon start       # Start the daemon in the background
zeroclaw daemon stop        # Stop the running daemon
zeroclaw daemon restart     # Restart the daemon (config reload)
zeroclaw daemon logs        # Show daemon logs

# Agent management
zeroclaw agent start        # Start the agent (requires a running daemon)
zeroclaw agent stop         # Stop the agent
zeroclaw agent restart      # Restart the agent (config reload)

# Pairing operations
zeroclaw pairing init       # Generate a new pairing secret
zeroclaw pairing rotate     # Rotate the existing pairing secret

# Tunneling (for public exposure)
zeroclaw tunnel start       # Start a tunnel to the local daemon
zeroclaw tunnel stop        # Stop the active tunnel

# Diagnostics
zeroclaw doctor             # Run system health checks
zeroclaw version            # Show version and build info
```

See the [Commands Reference](docs/commands-reference.md) for full options and examples.

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                        Channels (trait)                         │
│   Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom      │
└─────────────────────────┬───────────────────────────────────────┘
                          │
                          ▼
┌─────────────────────────────────────────────────────────────────┐
│                       Agent Orchestrator                        │
│   ┌──────────────┐   ┌──────────────┐   ┌──────────────┐        │
│   │   Message    │   │    Memory    │   │     Tool     │        │
│   │   Routing    │   │   Context    │   │  Execution   │        │
│   └──────────────┘   └──────────────┘   └──────────────┘        │
└─────────────────────────┬───────────────────────────────────────┘
                          │
          ┌───────────────┼───────────────┐
          ▼               ▼               ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│  Providers   │  │    Memory    │  │    Tools     │
│   (trait)    │  │   (trait)    │  │   (trait)    │
├──────────────┤  ├──────────────┤  ├──────────────┤
│  Anthropic   │  │  Markdown    │  │  Filesystem  │
│  OpenAI      │  │  SQLite      │  │  Bash        │
│  Gemini      │  │  None        │  │  Web Fetch   │
│  Ollama      │  │  Custom      │  │  Custom      │
│  Custom      │  └──────────────┘  └──────────────┘
└──────────────┘
        │
        ▼
┌─────────────────────────────────────────────────────────────────┐
│                         Runtime (trait)                         │
│                        Native │ Docker                          │
└─────────────────────────────────────────────────────────────────┘
```

**Key principles:**

- Everything is a **trait** — providers, channels, tools, memory, tunnels
- Channels call the orchestrator; the orchestrator calls providers + tools
- The memory system manages conversational context (markdown, SQLite, or none)
- The runtime abstracts code execution (native or Docker)
- No vendor lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes

See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
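To make the "everything is a trait" idea concrete, here is a minimal illustrative sketch; the names (`Provider`, `complete`) are hypothetical and not ZeroClaw's real API:

```rust
// Illustrative trait-based design; Provider and complete() are
// hypothetical names, not ZeroClaw's actual interfaces.
trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// Swapping backends means swapping implementations, not call sites.
struct EchoProvider;

impl Provider for EchoProvider {
    fn name(&self) -> &str {
        "echo"
    }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

// The orchestrator only ever sees the trait object, never a concrete vendor.
fn run(provider: &dyn Provider, prompt: &str) -> String {
    provider
        .complete(prompt)
        .unwrap_or_else(|e| format!("[{} error] {e}", provider.name()))
}
```

Because `run` takes `&dyn Provider`, any backend (Anthropic, OpenAI, Ollama, or a custom one) plugs in without touching orchestration code.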

## Examples

### Telegram Bot

```toml
[channels.telegram]
enabled = true
bot_token = "123456:ABC-DEF..."
allowed_users = [987654321] # Your Telegram user ID
```

Start the daemon + agent, then message your bot on Telegram:

```
/start
Hi! Could you help me write a Python script?
```

The bot replies with AI-generated code, runs tools when asked, and keeps the conversation context.

### Matrix (end-to-end encrypted)

```toml
[channels.matrix]
enabled = true
homeserver_url = "https://matrix.org"
username = "@zeroclaw:matrix.org"
password = "..."
device_name = "zeroclaw-prod"
e2ee_enabled = true
```

Invite `@zeroclaw:matrix.org` to an encrypted room, and the bot will reply with full encryption. See the [Matrix E2EE Guide](docs/matrix-e2ee-guide.md) for device-verification setup.

### Multi-Provider

```toml
[providers.anthropic]
enabled = true
api_key = "sk-ant-..."
model = "claude-sonnet-4-20250514"

[providers.openai]
enabled = true
api_key = "sk-..."
model = "gpt-4o"

[orchestrator]
default_provider = "anthropic"
fallback_providers = ["openai"] # Failover on provider error
```

If Anthropic fails or is rate-limited, the orchestrator automatically fails over to OpenAI.
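The failover behavior can be sketched as an ordered walk over provider trait objects (a simplified illustration; the trait and function names are hypothetical, not ZeroClaw's internals):

```rust
// Simplified failover sketch; names are illustrative only.
trait Provider {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct FailingProvider; // stands in for a rate-limited or unavailable backend
impl Provider for FailingProvider {
    fn complete(&self, _prompt: &str) -> Result<String, String> {
        Err("rate limited".to_string())
    }
}

struct EchoProvider; // stands in for a healthy fallback backend
impl Provider for EchoProvider {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

// Try the default provider first, then each fallback in order.
fn complete_with_fallback(
    providers: &[&dyn Provider],
    prompt: &str,
) -> Result<String, String> {
    for provider in providers {
        if let Ok(reply) = provider.complete(prompt) {
            return Ok(reply);
        }
    }
    Err("all providers failed".to_string())
}
```

The `fallback_providers` list above plays the role of the ordered slice: the first provider that returns `Ok` wins.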

### Custom Memory

```toml
[memory]
kind = "sqlite"
path = "~/.zeroclaw/workspace/memory/conversations.db"
retention_days = 90 # Automatic purge after 90 days
```

Or use Markdown for human-readable storage:

```toml
[memory]
kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```

See the [Config Reference](docs/config-reference.md#memory) for all memory options.
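The `retention_days` purge amounts to an age check against each stored conversation. A sketch of that check (an assumed illustration, not the actual SQLite backend logic):

```rust
use std::time::{Duration, SystemTime};

// Illustrative age check behind a retention_days-style purge;
// not ZeroClaw's actual memory backend code.
fn is_expired(created: SystemTime, retention_days: u64, now: SystemTime) -> bool {
    let limit = Duration::from_secs(retention_days * 24 * 60 * 60);
    match now.duration_since(created) {
        Ok(age) => age > limit, // older than the retention window: purge
        Err(_) => false,        // created "in the future": keep it
    }
}
```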

## Provider Support

| Provider | Status | API Key | Example Models |
| --- | --- | --- | --- |
| **Anthropic** | ✅ Stable | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
| **OpenAI** | ✅ Stable | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
| **Google Gemini** | ✅ Stable | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
| **Ollama** | ✅ Stable | N/A (local) | `llama3.3`, `qwen2.5`, `phi4` |
| **Cerebras** | ✅ Stable | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
| **Groq** | ✅ Stable | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
| **Mistral** | 🚧 Planned | `MISTRAL_API_KEY` | TBD |
| **Cohere** | 🚧 Planned | `COHERE_API_KEY` | TBD |

### Custom Endpoints

ZeroClaw supports OpenAI-compatible endpoints:

```toml
[providers.custom]
enabled = true
api_key = "..."
base_url = "https://api.your-llm-provider.com/v1"
model = "your-model-name"
```

Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to reach any LLM through the OpenAI interface.
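"OpenAI-compatible" means the provider serves the same REST shape under your `base_url`; chat requests conventionally go to `<base_url>/chat/completions`. A small sketch of deriving that endpoint (the path is the common OpenAI convention, assumed here rather than quoted from ZeroClaw's source):

```rust
// Builds the conventional OpenAI-style chat endpoint from a base_url.
// Illustrative helper; ZeroClaw's internals may differ.
fn chat_completions_url(base_url: &str) -> String {
    // Tolerate a trailing slash in the configured base_url.
    format!("{}/chat/completions", base_url.trim_end_matches('/'))
}
```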

See the [Providers Reference](docs/providers-reference.md) for full configuration details.

## Channel Support

| Channel | Status | Authentication | Notes |
| --- | --- | --- | --- |
| **Telegram** | ✅ Stable | Bot token | Full support, including files, images, inline buttons |
| **Matrix** | ✅ Stable | Password or token | E2EE support with device verification |
| **Slack** | 🚧 Planned | OAuth or bot token | Requires workspace access |
| **Discord** | 🚧 Planned | Bot token | Requires guild permissions |
| **WhatsApp** | 🚧 Planned | Twilio or official API | Requires a business account |
| **CLI** | ✅ Stable | None | Direct conversational interface |
| **Web** | 🚧 Planned | API key or OAuth | Browser-based chat interface |

See the [Channels Reference](docs/channels-reference.md) for full setup instructions.

## Tool Support

ZeroClaw ships built-in tools for code execution, filesystem access, and web retrieval:

| Tool | Description | Runtime Required |
| --- | --- | --- |
| **bash** | Runs shell commands | Native or Docker |
| **python** | Runs Python scripts | Python 3.8+ (native) or Docker |
| **javascript** | Runs Node.js code | Node.js 18+ (native) or Docker |
| **filesystem_read** | Reads files | Native or Docker |
| **filesystem_write** | Writes files | Native or Docker |
| **web_fetch** | Fetches web content | Native or Docker |

### Execution Security

- **Native runtime** — runs as the daemon's user process, full filesystem access
- **Docker runtime** — full container isolation, separate filesystems and networks

Configure the execution policy in `config.toml`:

```toml
[runtime]
kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
```

See the [Config Reference](docs/config-reference.md#runtime) for full security options.
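The effect of `allowed_tools` is deny-by-default filtering: anything not named in the list is rejected. A minimal sketch of that gate (a hypothetical helper, not the real runtime code):

```rust
// Deny-by-default gate: a tool may run only if it appears in the allowlist.
// Hypothetical helper for illustration, not ZeroClaw's actual runtime.
fn is_tool_allowed(tool: &str, allowed_tools: &[&str]) -> bool {
    allowed_tools.contains(&tool)
}
```

With the config above, `filesystem_write` would be refused even though the tool itself is built in, because it is absent from the allowlist.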

## Deployment

### Local Deployment (Development)

```bash
zeroclaw daemon start
zeroclaw agent start
```

### Server Deployment (Production)

Use systemd to manage the daemon and agent as services:

```bash
# Install the binary
cargo install --path . --locked

# Set up the workspace
zeroclaw init

# Create the systemd service files
sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/

# Enable and start the services
sudo systemctl enable zeroclaw-daemon zeroclaw-agent
sudo systemctl start zeroclaw-daemon zeroclaw-agent

# Check the status
sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```

See the [Network Deployment Guide](docs/network-deployment.md) for full production deployment instructions.

### Docker

```bash
# Build the image
docker build -t zeroclaw:latest .

# Run the container
docker run -d \
  --name zeroclaw \
  -v ~/.zeroclaw/workspace:/workspace \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  zeroclaw:latest
```

See the [`Dockerfile`](Dockerfile) for build details and configuration options.

### Edge Hardware

ZeroClaw is designed to run on low-power hardware:

- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, ultra-low cost
- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support

See the [Hardware Guide](docs/hardware/README.md) for device-specific setup instructions.

## Tunneling (Public Exposure)

Expose your local ZeroClaw daemon to the public network via secure tunnels:

```bash
zeroclaw tunnel start --provider cloudflare
```

Supported tunnel providers:

- **Cloudflare Tunnel** — free HTTPS, no exposed ports, multi-domain support
- **Ngrok** — quick setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port

See the [Config Reference](docs/config-reference.md#tunnel) for full configuration options.
|
||||
|
||||
## Seguridad
|
||||
|
||||
ZeroClaw implementa múltiples capas de seguridad:
|
||||
|
||||
### Emparejamiento
|
||||
|
||||
El daemon genera un secreto de emparejamiento al primer inicio almacenado en `~/.zeroclaw/workspace/.pairing`. Los clientes (agent, CLI) deben presentar este secreto para conectarse.
|
||||
|
||||
```bash
|
||||
zeroclaw pairing rotate # Genera un nuevo secreto e invalida el anterior
|
||||
```
|
||||
|
||||
### Sandboxing
|
||||
|
||||
- **Runtime Docker** — aislamiento completo de contenedor con sistemas de archivos y redes separados
|
||||
- **Runtime Nativo** — se ejecuta como proceso de usuario, con alcance de workspace por defecto

### Allowlists

Channels can restrict access by user ID:

```toml
[channels.telegram]
enabled = true
allowed_users = [123456789, 987654321] # Explicit allowlist
```

### Encryption

- **Matrix E2EE** — full end-to-end encryption with device verification
- **TLS transport** — all API and tunnel traffic uses HTTPS/TLS

See the [Security Documentation](docs/security/README.md) for complete policies and practices.

## Observability

ZeroClaw writes logs to `~/.zeroclaw/workspace/logs/` by default. Logs are stored per component:

```
~/.zeroclaw/workspace/logs/
├── daemon.log   # Daemon logs (startup, API requests, errors)
├── agent.log    # Agent logs (message routing, tool execution)
├── telegram.log # Channel-specific logs (if enabled)
└── matrix.log   # Channel-specific logs (if enabled)
```

### Logging Configuration

```toml
[logging]
level = "info" # debug, info, warn, error
path = "~/.zeroclaw/workspace/logs/"
rotation = "daily" # daily, hourly, size
max_size_mb = 100 # For size-based rotation
retention_days = 30 # Automatic purge after N days
```

See the [Configuration Reference](docs/config-reference.md#logging) for all logging options.
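For quick triage, standard shell tools work on these files. The snippet below writes a small synthetic log so it is self-contained; the actual ZeroClaw log line format is an assumption here, so adjust the pattern to what you see in your own `daemon.log`:

```shell
# Synthetic sample lines (the real log format may differ),
# then count how many are at ERROR level.
printf '2026-02-19T10:00:00Z ERROR daemon: bind failed\n2026-02-19T10:00:01Z INFO daemon: retrying\n' > /tmp/zeroclaw-daemon-sample.log
grep -c ' ERROR ' /tmp/zeroclaw-daemon-sample.log
```

The same pattern works with `tail -f` for live monitoring.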

### Metrics (Planned)

Prometheus metrics support for production monitoring is coming soon. Tracked in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).

## Skills

ZeroClaw supports custom skills — reusable modules that extend the system's capabilities.

### Skill Definition

Skills are stored in `~/.zeroclaw/workspace/skills/<skill-name>/` with this structure:

```
skills/
└── my-skill/
    ├── skill.toml # Skill metadata (name, description, dependencies)
    ├── prompt.md  # System prompt for the AI
    └── tools/     # Optional custom tools
        └── my_tool.py
```

### Skill Example

```toml
# skills/web-research/skill.toml
[skill]
name = "web-research"
description = "Searches the web and summarizes results"
version = "1.0.0"

[dependencies]
tools = ["web_fetch", "bash"]
```

```markdown
<!-- skills/web-research/prompt.md -->

You are a research assistant. When asked to look something up:

1. Use web_fetch to retrieve the content
2. Summarize the results in an easy-to-read format
3. Cite sources with URLs
```
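Combining the layout from the Skill Definition section with the files above, the skill can be scaffolded directly from the shell (paths follow the documented layout; file contents are the example above):

```shell
SKILL_DIR="${HOME}/.zeroclaw/workspace/skills/web-research"
mkdir -p "${SKILL_DIR}/tools"

# Skill metadata, as in the example above
cat > "${SKILL_DIR}/skill.toml" <<'EOF'
[skill]
name = "web-research"
description = "Searches the web and summarizes results"
version = "1.0.0"

[dependencies]
tools = ["web_fetch", "bash"]
EOF

# System prompt for the AI
cat > "${SKILL_DIR}/prompt.md" <<'EOF'
You are a research assistant. When asked to look something up:

1. Use web_fetch to retrieve the content
2. Summarize the results in an easy-to-read format
3. Cite sources with URLs
EOF
```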

### Using Skills

Skills are loaded automatically at agent startup. Reference them by name in conversation:

```
User: Use the web-research skill to find the latest AI news
Bot: [loads the web-research skill, runs web_fetch, summarizes results]
```

See the [Skill Definition](#skill-definition) section above for the full skill layout and creation instructions.

## Open Skills

ZeroClaw supports [Open Skills](https://github.com/openagents-com/open-skills) — a modular, provider-agnostic system for extending AI agent capabilities.

### Enabling Open Skills

```toml
[skills]
open_skills_enabled = true
# open_skills_dir = "/path/to/open-skills" # optional
```

You can also override at runtime with `ZEROCLAW_OPEN_SKILLS_ENABLED` and `ZEROCLAW_OPEN_SKILLS_DIR`.

## Development

```bash
cargo build # Development build
cargo build --release # Release build (codegen-units=1, works on all devices including Raspberry Pi)
cargo build --profile release-fast # Faster build (codegen-units=8, requires 16 GB+ RAM)
cargo test # Run the full test suite
cargo clippy --locked --all-targets -- -D clippy::correctness
cargo fmt # Format

# Run the SQLite vs Markdown comparison benchmark
cargo test --test memory_comparison -- --nocapture
```

### Pre-push Hook

A git hook runs `cargo fmt --check`, `cargo clippy -- -D warnings`, and `cargo test` before every push. Enable it once:

```bash
git config core.hooksPath .githooks
```

### Build Troubleshooting (OpenSSL errors on Linux)

If you hit an `openssl-sys` build error, sync dependencies and rebuild with the repository lockfile:

```bash
git pull
cargo build --release --locked
cargo install --path . --force --locked
```

ZeroClaw is configured to use `rustls` for HTTP/TLS dependencies; `--locked` keeps the transitive dependency graph deterministic in clean environments.

To skip the hook when you need a quick push during development:

```bash
git push --no-verify
```

## Collaboration and Docs

Start with the documentation hub for a task-based map:

- Documentation Hub: [`docs/README.md`](docs/README.md)
- Unified Docs Table of Contents: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Commands Reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Configuration Reference: [`docs/config-reference.md`](docs/config-reference.md)
- Providers Reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channels Reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations Runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Docs Inventory/Classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/Issue Triage Snapshot (as of Feb 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)

Core collaboration references:

- Documentation Hub: [docs/README.md](docs/README.md)
- Documentation Template: [docs/doc-template.md](docs/doc-template.md)
- Documentation Change Checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel Configuration Reference: [docs/channels-reference.md](docs/channels-reference.md)
- Matrix Encrypted Room Operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Contribution Guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR Workflow Policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer Playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- CI Ownership and Triage Map: [docs/ci-map.md](docs/ci-map.md)
- Security Disclosure Policy: [SECURITY.md](SECURITY.md)

For deployment and runtime operations:

- Network Deployment Guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy Agent Playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)

## Supporting ZeroClaw

If ZeroClaw helps your work and you would like to support ongoing development, you can donate here:

<a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=for-the-badge&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>

### 🙏 Special Thanks

Heartfelt thanks to the communities and institutions that inspire and fuel this open-source work:

- **Harvard University** — for fostering intellectual curiosity and pushing the limits of what is possible.
- **MIT** — for championing open knowledge, open source, and the conviction that technology should be accessible to everyone.
- **Sundai Club** — for the community, the energy, and the relentless will to build things that matter.
- **The World and Beyond** 🌍✨ — to every contributor, dreamer, and builder out there who makes open source a force for good. This is for you.

We build in the open because the best ideas come from everywhere. If you are reading this, you are part of it. Welcome. 🦀❤️

## ⚠️ Official Repository and Impersonation Warning

**This is the only official ZeroClaw repository:**

> <https://github.com/zeroclaw-labs/zeroclaw>

Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and not affiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).

If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).

---

## License

ZeroClaw is dual-licensed for maximum openness and contributor protection:

| License | Use Cases |
| ---------------------------- | ------------------------------------------------------ |
| [MIT](LICENSE-MIT) | Open source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |

You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
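In Rust packaging terms, this dual-license choice is conventionally declared with an SPDX expression in `Cargo.toml` (illustrative; the repository's actual manifest may differ):

```toml
[package]
license = "MIT OR Apache-2.0"
```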

### Trademark

The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.

### Contributor Protections

- **You retain copyright** of your contributions
- The **patent grant** (Apache 2.0) protects you against patent claims by other contributors
- Your contributions are **permanently attributed** in the commit history and [NOTICE](NOTICE)
- No trademark rights are transferred by contributing

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:

- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
- New `Observer` → `src/observability/`
- New `Tool` → `src/tools/`
- New `Memory` → `src/memory/`
- New `Tunnel` → `src/tunnel/`
- New `Skill` → `~/.zeroclaw/workspace/skills/<n>/`

---

**ZeroClaw** — Zero overhead. Zero compromise. Deploy anywhere. Swap anything. 🦀

## Star History

<p align="center">
  <a href="https://www.star-history.com/#zeroclaw-labs/zeroclaw&type=date&legend=top-left">
    <picture>
      <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&theme=dark&legend=top-left" />
      <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
      <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=zeroclaw-labs/zeroclaw&type=date&legend=top-left" />
    </picture>
  </a>
</p>
+179
@@ -0,0 +1,179 @@

<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
  <strong>Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.</strong><br>
  ⚡️ <strong>Runs on $10 hardware with <5 MB RAM: that is 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
  <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>

<p align="center">
  🌐 <strong>Languages:</strong>
  <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

---

## What is ZeroClaw?

ZeroClaw is a lightweight, customizable, and extensible AI assistant infrastructure built in Rust. It connects different LLM providers (Anthropic, OpenAI, Google, Ollama, and others) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, and more).

### Key Features

- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Supports OpenAI, Anthropic, Google Gemini, Ollama, and more
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Add custom tools easily
- **🔒 Security first**: Reverse proxy support, privacy-first design

---

## Quick Start

### Requirements

- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)

### Installation

```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```

### With Docker

```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```

---

## Configuration

ZeroClaw uses a YAML configuration file. By default it looks for `config.yaml`.

```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```
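The `${VAR}` placeholders above are resolved from the environment, so the keys must be exported before launching ZeroClaw. The values below are placeholders, not real credentials:

```shell
# Placeholder values; substitute your real keys before running.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export TELEGRAM_BOT_TOKEN="123456:placeholder"
```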

---

## Documentation

For detailed documentation, see:

- [Documentation Hub](docs/README.md)
- [Commands Reference](docs/commands-reference.md)
- [Providers Reference](docs/providers-reference.md)
- [Channels Reference](docs/channels-reference.md)
- [Configuration Reference](docs/config-reference.md)

---

## Contributing

Contributions are welcome! Read the [Contribution Guide](CONTRIBUTING.md).

---

## License

This project is dual-licensed:

- MIT License
- Apache License, Version 2.0

See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.

---

## Community

- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)

---

## Sponsors

If ZeroClaw is useful to you, consider buying us a coffee:

[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)
+79
-52
@@ -1,5 +1,5 @@
 <p align="center">
-  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
+  <img src="docs/assets/zeroclaw.png" alt="ZeroClaw" width="200" />
 </p>

 <h1 align="center">ZeroClaw 🦀</h1>
@@ -14,11 +14,7 @@
   <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
   <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
   <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
-  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
-  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
   <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
-  <a href="https://t.me/zeroclawlabs_cn"><img src="https://img.shields.io/badge/Telegram%20CN-%40zeroclawlabs__cn-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram CN: @zeroclawlabs_cn" /></a>
-  <a href="https://t.me/zeroclawlabs_ru"><img src="https://img.shields.io/badge/Telegram%20RU-%40zeroclawlabs__ru-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram RU: @zeroclawlabs_ru" /></a>
   <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
   <a href="https://www.reddit.com/r/zeroclawlabs/"><img src="https://img.shields.io/badge/Reddit-r%2Fzeroclawlabs-FF4500?style=flat&logo=reddit&logoColor=white" alt="Reddit: r/zeroclawlabs" /></a>
 </p>
@@ -26,12 +22,43 @@ Built by students and members of the Harvard, MIT, and Sundai communities.
 <p align="center">
 </p>

 <p align="center">
-  🌐 <strong>Languages:</strong> <a href="README.md">English</a> · <a href="README.zh-CN.md">简体中文</a> · <a href="README.ja.md">日本語</a> · <a href="README.ru.md">Русский</a> · <a href="README.fr.md">Français</a> · <a href="README.vi.md">Tiếng Việt</a>
+  🌐 <strong>Languages:</strong>
+  <a href="README.md">🇺🇸 English</a> ·
+  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
+  <a href="README.ja.md">🇯🇵 日本語</a> ·
+  <a href="README.ko.md">🇰🇷 한국어</a> ·
+  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
+  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
+  <a href="README.es.md">🇪🇸 Español</a> ·
+  <a href="README.pt.md">🇧🇷 Português</a> ·
+  <a href="README.it.md">🇮🇹 Italiano</a> ·
+  <a href="README.de.md">🇩🇪 Deutsch</a> ·
+  <a href="README.fr.md">🇫🇷 Français</a> ·
+  <a href="README.ar.md">🇸🇦 العربية</a> ·
+  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
+  <a href="README.ru.md">🇷🇺 Русский</a> ·
+  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
+  <a href="README.he.md">🇮🇱 עברית</a> ·
+  <a href="README.pl.md">🇵🇱 Polski</a> ·
+  <a href="README.cs.md">🇨🇿 Čeština</a> ·
+  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
+  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
+  <a href="README.uk.md">🇺🇦 Українська</a> ·
+  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
+  <a href="README.th.md">🇹🇭 ไทย</a> ·
+  <a href="README.ur.md">🇵🇰 اردو</a> ·
+  <a href="README.ro.md">🇷🇴 Română</a> ·
+  <a href="README.sv.md">🇸🇪 Svenska</a> ·
+  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
+  <a href="README.hu.md">🇭🇺 Magyar</a> ·
+  <a href="README.fi.md">🇫🇮 Suomi</a> ·
+  <a href="README.da.md">🇩🇰 Dansk</a> ·
+  <a href="README.nb.md">🇳🇴 Norsk</a>
 </p>

 <p align="center">
   <a href="#démarrage-rapide">Getting started</a> |
-  <a href="bootstrap.sh">One-click setup</a> |
+  <a href="install.sh">One-click setup</a> |
   <a href="docs/README.md">Documentation Hub</a> |
   <a href="docs/SUMMARY.md">Documentation Table of Contents</a>
 </p>
@@ -39,8 +66,8 @@ Built by students and members of the Harvard, MIT, and Sundai communities.
 <p align="center">
   <strong>Quick links:</strong>
   <a href="docs/reference/README.md">Reference</a> ·
-  <a href="docs/operations/README.md">Operations</a> ·
-  <a href="docs/troubleshooting.md">Troubleshooting</a> ·
+  <a href="docs/ops/README.md">Operations</a> ·
+  <a href="docs/ops/troubleshooting.md">Troubleshooting</a> ·
   <a href="docs/security/README.md">Security</a> ·
   <a href="docs/hardware/README.md">Hardware</a> ·
   <a href="docs/contributing/README.md">Contributing</a>
@@ -64,7 +91,7 @@ Use this table for important notices (breaking changes, security advisories,
 | Date (UTC) | Level | Notice | Action |
 | ---------- | ----------- | ------ | ------ |
 | 2026-02-19 | _Critical_ | We are **not affiliated** with `openagen/zeroclaw` or `zeroclaw.org`. The `zeroclaw.org` domain currently points to the `openagen/zeroclaw` fork, and that domain/repository impersonates our official website/project. | Do not trust information, binaries, fundraising, or announcements from those sources. Use only [this repository](https://github.com/zeroclaw-labs/zeroclaw) and our verified social accounts. |
-| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience during the wait. We are still seeing impersonation attempts: do not take part in any investment/fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Telegram CN (@zeroclawlabs_cn)](https://t.me/zeroclawlabs_cn), [Telegram RU (@zeroclawlabs_ru)](https://t.me/zeroclawlabs_ru), and [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) for official updates. |
+| 2026-02-21 | _Important_ | Our official website is now live: [zeroclawlabs.ai](https://zeroclawlabs.ai). Thank you for your patience during the wait. We are still seeing impersonation attempts: do not take part in any investment/fundraising activity in ZeroClaw's name unless it is published through our official channels. | Use [this repository](https://github.com/zeroclaw-labs/zeroclaw) as the single source of truth. Follow [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Facebook (group)](https://www.facebook.com/groups/zeroclaw), and [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) for official updates. |
 | 2026-02-19 | _Important_ | Anthropic updated its authentication and credential-use terms on 2026-02-19. OAuth authentication (Free, Pro, Max) is intended exclusively for Claude Code and Claude.ai; using Claude Free/Pro/Max OAuth tokens in any other product, tool, or service (including the Agent SDK) is not permitted and may violate the consumer Terms of Service. | Please avoid Claude Code OAuth integrations for now to prevent potential losses. Original clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |

 ### ✨ Features
@@ -96,7 +123,7 @@ Quick benchmark on a local machine (macOS arm64, Feb 2026), normalized for ma
 > Notes: ZeroClaw figures are measured on production builds using `/usr/bin/time -l`. OpenClaw requires the Node.js runtime (typically ~390 MB of additional memory overhead), while NanoBot requires the Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.

 <p align="center">
-  <img src="zero-claw.jpeg" alt="ZeroClaw vs OpenClaw comparison" width="800" />
+  <img src="docs/assets/zeroclaw-comparison.jpeg" alt="ZeroClaw vs OpenClaw comparison" width="800" />
 </p>

 ### Reproducible local measurement
@@ -189,10 +216,10 @@ Sample output (macOS arm64, measured February 18, 2026):

 ### Option 1: Automated setup (recommended)

-The `bootstrap.sh` script installs Rust, clones ZeroClaw, builds it, and sets up your initial development environment:
+The `install.sh` script installs Rust, clones ZeroClaw, builds it, and sets up your initial development environment:

 ```bash
-curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/bootstrap.sh | bash
+curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/master/install.sh | bash
 ```

 This will:
@@ -248,9 +275,9 @@ Once installed (via bootstrap or manually), you should see:
 **Next steps:**

 1. Configure your AI providers in `~/.zeroclaw/workspace/config.toml`
-2. See the [configuration reference](docs/config-reference.md) for advanced options
+2. See the [configuration reference](docs/reference/api/config-reference.md) for advanced options
 3. Start the agent: `zeroclaw agent start`
-4. Test via your preferred channel (see the [channels reference](docs/channels-reference.md))
+4. Test via your preferred channel (see the [channels reference](docs/reference/api/channels-reference.md))

 ## Configuration
@@ -286,10 +313,10 @@ kind = "native" # or "docker" (requires Docker)

 **Complete reference documents:**

-- [Configuration Reference](docs/config-reference.md) — all settings, validation, defaults
-- [Providers Reference](docs/providers-reference.md) — AI-provider-specific configuration
-- [Channels Reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
-- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling
+- [Configuration Reference](docs/reference/api/config-reference.md) — all settings, validation, defaults
+- [Providers Reference](docs/reference/api/providers-reference.md) — AI-provider-specific configuration
+- [Channels Reference](docs/reference/api/channels-reference.md) — Telegram, Matrix, Slack, Discord, and more
+- [Operations](docs/ops/operations-runbook.md) — production monitoring, secret rotation, scaling

 ### Runtime support (current)
@@ -298,7 +325,7 @@ ZeroClaw supports two code-execution backends:

- **`native`** (default) — direct process execution, fastest path, ideal for trusted environments
- **`docker`** — full container isolation, hardened security policies, requires Docker

Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/config-reference.md#runtime) for full details.
Use `runtime.kind = "docker"` if you need strict sandboxing or network isolation. See the [configuration reference](docs/reference/api/config-reference.md#runtime) for full details.

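Putting the two backends side by side, a minimal hardened runtime section might look like the sketch below. Only `runtime.kind` and `allowed_tools` appear elsewhere in this README; treat the exact table shape as an assumption and verify it against the configuration reference.

```toml
# Sketch only - check the configuration reference (#runtime) for the full schema.
[runtime]
kind = "docker"   # "native" (default) or "docker" (requires Docker)
allowed_tools = ["bash", "python", "filesystem_read"]   # explicit allowlist
```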
## Commands

@@ -332,7 +359,7 @@ zeroclaw doctor # Runs system health checks
zeroclaw version # Shows version and build information
```

See the [Command Reference](docs/commands-reference.md) for complete options and examples.
See the [Command Reference](docs/reference/cli/commands-reference.md) for complete options and examples.

## Architecture

@@ -379,7 +406,7 @@ See the [Command Reference](docs/commands-reference.md) for complete options and examples.

- The runtime abstracts code execution (native or Docker)
- No vendor lock-in — swap Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama with no code changes

See the [architecture documentation](docs/architecture.svg) for detailed diagrams and implementation details.
See the [architecture documentation](docs/assets/architecture.svg) for detailed diagrams and implementation details.

## Examples

@@ -413,7 +440,7 @@ device_name = "zeroclaw-prod"
e2ee_enabled = true
```

Invite `@zeroclaw:matrix.org` into an encrypted room, and the bot will reply with full encryption. See the [Matrix E2EE Guide](docs/matrix-e2ee-guide.md) for device-verification setup.
Invite `@zeroclaw:matrix.org` into an encrypted room, and the bot will reply with full encryption. See the [Matrix E2EE Guide](docs/security/matrix-e2ee-guide.md) for device-verification setup.

### Multi-Provider

@@ -452,7 +479,7 @@ kind = "markdown"
path = "~/.zeroclaw/workspace/memory/"
```

See the [Configuration Reference](docs/config-reference.md#memory) for all memory options.
See the [Configuration Reference](docs/reference/api/config-reference.md#memory) for all memory options.

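For instance, the Markdown-backed memory shown in the hunk above could be sketched in full as follows. The `[memory]` table name is an assumption inferred from the `#memory` anchor; only the `kind` and `path` fields appear verbatim in this README.

```toml
# Sketch only - verify field names against the configuration reference (#memory).
[memory]
kind = "markdown"                        # file-based memory backend
path = "~/.zeroclaw/workspace/memory/"   # where memory files are stored
```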
## Provider Support

@@ -481,7 +508,7 @@ model = "your-model-name"

Example: use [LiteLLM](https://github.com/BerriAI/litellm) as a proxy to reach any LLM through the OpenAI interface.

See the [Provider Reference](docs/providers-reference.md) for complete configuration details.
See the [Provider Reference](docs/reference/api/providers-reference.md) for complete configuration details.

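As a hedged sketch, pointing ZeroClaw at a LiteLLM proxy through the OpenAI-compatible interface might look like this. The `[provider]` table name, the `kind` value, and the `base_url` field are illustrative assumptions (only `model = "your-model-name"` appears verbatim above); the real field names live in the provider reference.

```toml
# Illustrative only - see the provider reference for the actual schema.
[provider]
kind = "openai"                      # assumed: any OpenAI-compatible interface
base_url = "http://localhost:4000"   # assumed: a local LiteLLM proxy endpoint
model = "your-model-name"
```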
## Channel Support

@@ -495,7 +522,7 @@ See the [Provider Reference](docs/providers-reference.md) for complete configuration details.

| **CLI** | ✅ Stable | None | Direct conversational interface |
| **Web** | 🚧 Planned | API key or OAuth | Browser-based chat interface |

See the [Channel Reference](docs/channels-reference.md) for complete setup instructions.
See the [Channel Reference](docs/reference/api/channels-reference.md) for complete setup instructions.

## Tool Support

@@ -523,7 +550,7 @@ kind = "docker"
allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
```

See the [Configuration Reference](docs/config-reference.md#runtime) for complete security options.
See the [Configuration Reference](docs/reference/api/config-reference.md#runtime) for complete security options.

## Deployment

@@ -558,7 +585,7 @@ sudo systemctl status zeroclaw-daemon
sudo systemctl status zeroclaw-agent
```

See the [Network Deployment Guide](docs/network-deployment.md) for complete production deployment instructions.
See the [Network Deployment Guide](docs/ops/network-deployment.md) for complete production deployment instructions.

### Docker

@@ -601,7 +628,7 @@ Supported tunnel providers:

- **Ngrok** — quick setup, custom domains (paid plan)
- **Tailscale** — private mesh network, no public port

See the [Configuration Reference](docs/config-reference.md#tunnel) for complete configuration options.
See the [Configuration Reference](docs/reference/api/config-reference.md#tunnel) for complete configuration options.

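A tunnel section selecting one of the providers above might be sketched like this. The `[tunnel]` table name and the `kind` values are assumptions inferred from the `#tunnel` anchor and the provider list, not confirmed field names.

```toml
# Sketch - see the configuration reference (#tunnel) for the actual options.
[tunnel]
kind = "tailscale"   # assumed values: e.g. "ngrok" or "tailscale"
```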
## Security

@@ -660,7 +687,7 @@ max_size_mb = 100 # For size-based rotation
retention_days = 30 # Automatic purge after N days
```

See the [Configuration Reference](docs/config-reference.md#logging) for all logging options.
See the [Configuration Reference](docs/reference/api/config-reference.md#logging) for all logging options.

### Metrics (Planned)

@@ -777,32 +804,32 @@ Start with the documentation hub for a task-based map:

- Documentation hub: [`docs/README.md`](docs/README.md)
- Unified docs table of contents: [`docs/SUMMARY.md`](docs/SUMMARY.md)
- Command reference: [`docs/commands-reference.md`](docs/commands-reference.md)
- Configuration reference: [`docs/config-reference.md`](docs/config-reference.md)
- Provider reference: [`docs/providers-reference.md`](docs/providers-reference.md)
- Channel reference: [`docs/channels-reference.md`](docs/channels-reference.md)
- Operations runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
- Docs inventory/classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
- PR/issue triage snapshot (as of February 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
- Command reference: [`docs/reference/cli/commands-reference.md`](docs/reference/cli/commands-reference.md)
- Configuration reference: [`docs/reference/api/config-reference.md`](docs/reference/api/config-reference.md)
- Provider reference: [`docs/reference/api/providers-reference.md`](docs/reference/api/providers-reference.md)
- Channel reference: [`docs/reference/api/channels-reference.md`](docs/reference/api/channels-reference.md)
- Operations runbook: [`docs/ops/operations-runbook.md`](docs/ops/operations-runbook.md)
- Troubleshooting: [`docs/ops/troubleshooting.md`](docs/ops/troubleshooting.md)
- Docs inventory/classification: [`docs/maintainers/docs-inventory.md`](docs/maintainers/docs-inventory.md)
- PR/issue triage snapshot (as of February 18, 2026): [`docs/maintainers/project-triage-snapshot-2026-02-18.md`](docs/maintainers/project-triage-snapshot-2026-02-18.md)

Primary collaboration references:

- Documentation hub: [docs/README.md](docs/README.md)
- Documentation template: [docs/doc-template.md](docs/doc-template.md)
- Documentation template: [docs/contributing/doc-template.md](docs/contributing/doc-template.md)
- Documentation change checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
- Channel configuration reference: [docs/channels-reference.md](docs/channels-reference.md)
- Matrix encrypted-room operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
- Channel configuration reference: [docs/reference/api/channels-reference.md](docs/reference/api/channels-reference.md)
- Matrix encrypted-room operations: [docs/security/matrix-e2ee-guide.md](docs/security/matrix-e2ee-guide.md)
- Contributing guide: [CONTRIBUTING.md](CONTRIBUTING.md)
- PR workflow policy: [docs/pr-workflow.md](docs/pr-workflow.md)
- Reviewer playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
- CI ownership and triage map: [docs/ci-map.md](docs/ci-map.md)
- PR workflow policy: [docs/contributing/pr-workflow.md](docs/contributing/pr-workflow.md)
- Reviewer playbook (triage + deep review): [docs/contributing/reviewer-playbook.md](docs/contributing/reviewer-playbook.md)
- CI ownership and triage map: [docs/contributing/ci-map.md](docs/contributing/ci-map.md)
- Security disclosure policy: [SECURITY.md](SECURITY.md)

For deployment and runtime operations:

- Network deployment guide: [docs/network-deployment.md](docs/network-deployment.md)
- Proxy agent playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
- Network deployment guide: [docs/ops/network-deployment.md](docs/ops/network-deployment.md)
- Proxy agent playbook: [docs/ops/proxy-agent-playbook.md](docs/ops/proxy-agent-playbook.md)

## Supporting ZeroClaw

@@ -827,7 +854,7 @@ We build in the open because the best ideas come from everywhere.

> <https://github.com/zeroclaw-labs/zeroclaw>

Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and unaffiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).
Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and unaffiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](docs/maintainers/trademark.md).

If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).

@@ -842,11 +869,11 @@ ZeroClaw is dual-licensed for maximum openness and the protection of

| [MIT](LICENSE-MIT) | Open source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |

You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](docs/contributing/cla.md) for the full contributor agreement.

### Trademark

The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](docs/maintainers/trademark.md) for permitted and prohibited uses.

### Contributor Protections

@@ -857,9 +884,9 @@ The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation.

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](docs/contributing/cla.md). Implement a trait, submit a PR:

- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- CI workflow guide: [docs/contributing/ci-map.md](docs/contributing/ci-map.md)
- New `Provider` → `src/providers/`
- New `Channel` → `src/channels/`
- New `Observer` → `src/observability/`

+197
@@ -0,0 +1,197 @@
<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center" dir="rtl">
  <strong>Zero overhead. Zero compromises. 100% Rust. 100% agnostic.</strong><br>
  ⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
  <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>

<p align="center" dir="rtl">
  🌐 <strong>Languages:</strong>
  <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

---

## What is ZeroClaw?

<p align="center" dir="rtl">
ZeroClaw is a lightweight, mutable, and extensible AI assistant infrastructure built in Rust. It connects different LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).
</p>

### Key Features

<p align="center" dir="rtl">
- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Support for OpenAI, Anthropic, Google Gemini, Ollama, and others
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and others
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Easily add custom tools
- **🔒 Security-first**: Reverse proxy, privacy-first design
</p>

---

## Quick Start

### Prerequisites

<p align="center" dir="rtl">
- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)
</p>

### Installation

```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```

### With Docker

```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```

---

## Configuration

<p align="center" dir="rtl">
ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.
</p>

```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```

---

## Documentation

<p align="center" dir="rtl">
For detailed documentation, see:
</p>

- [Documentation hub](docs/README.md)
- [Command reference](docs/commands-reference.md)
- [Provider reference](docs/providers-reference.md)
- [Channel reference](docs/channels-reference.md)
- [Configuration reference](docs/config-reference.md)

---

## Contributing

<p align="center" dir="rtl">
Contributions are welcome! Please read the [contributing guide](CONTRIBUTING.md).
</p>

---

## License

<p align="center" dir="rtl">
This project is dual-licensed:
</p>

- MIT License
- Apache License, Version 2.0

<p align="center" dir="rtl">
See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.
</p>

---

## Community

- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)

---

## Sponsors

<p align="center" dir="rtl">
If ZeroClaw is useful to you, please consider buying us a coffee:
</p>

[](https://buymeacoffee.com/argenistherose)

+179
@@ -0,0 +1,179 @@
<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
  <strong>Zero overhead. Zero compromises. 100% Rust. 100% agnostic.</strong><br>
  ⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
  <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>

<p align="center">
  🌐 <strong>Languages:</strong>
  <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

---

## What is ZeroClaw?

ZeroClaw is a lightweight, mutable, and extensible AI assistant infrastructure built in Rust. It connects different LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).

### Key Features

- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Support for OpenAI, Anthropic, Google Gemini, Ollama, and others
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and others
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Easily add custom tools
- **🔒 Security-first**: Reverse proxy, privacy-first design

---

## Quick Start

### Prerequisites

- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)

### Installation

```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```

### With Docker

```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```

---

## Configuration

ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.

```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```

---

## Documentation

For detailed documentation, see:

- [Documentation hub](docs/README.md)
- [Command reference](docs/commands-reference.md)
- [Provider reference](docs/providers-reference.md)
- [Channel reference](docs/channels-reference.md)
- [Configuration reference](docs/config-reference.md)

---

## Contributing

Contributions are welcome! Please read the [contributing guide](CONTRIBUTING.md).

---

## License

This project is dual-licensed:

- MIT License
- Apache License, Version 2.0

See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.

---

## Community

- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)

---

## Sponsors

If ZeroClaw is useful to you, please consider buying us a coffee:

[](https://buymeacoffee.com/argenistherose)

+179
@@ -0,0 +1,179 @@
<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
  <strong>Zero overhead. Zero compromises. 100% Rust. 100% agnostic.</strong><br>
  ⚡️ <strong>Runs on $10 hardware with <5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
  <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>

<p align="center">
  🌐 <strong>Languages:</strong>
  <a href="README.md">🇺🇸 English</a> ·
  <a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
  <a href="README.ja.md">🇯🇵 日本語</a> ·
  <a href="README.ko.md">🇰🇷 한국어</a> ·
  <a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
  <a href="README.tl.md">🇵🇭 Tagalog</a> ·
  <a href="README.es.md">🇪🇸 Español</a> ·
  <a href="README.pt.md">🇧🇷 Português</a> ·
  <a href="README.it.md">🇮🇹 Italiano</a> ·
  <a href="README.de.md">🇩🇪 Deutsch</a> ·
  <a href="README.fr.md">🇫🇷 Français</a> ·
  <a href="README.ar.md">🇸🇦 العربية</a> ·
  <a href="README.hi.md">🇮🇳 हिन्दी</a> ·
  <a href="README.ru.md">🇷🇺 Русский</a> ·
  <a href="README.bn.md">🇧🇩 বাংলা</a> ·
  <a href="README.he.md">🇮🇱 עברית</a> ·
  <a href="README.pl.md">🇵🇱 Polski</a> ·
  <a href="README.cs.md">🇨🇿 Čeština</a> ·
  <a href="README.nl.md">🇳🇱 Nederlands</a> ·
  <a href="README.tr.md">🇹🇷 Türkçe</a> ·
  <a href="README.uk.md">🇺🇦 Українська</a> ·
  <a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
  <a href="README.th.md">🇹🇭 ไทย</a> ·
  <a href="README.ur.md">🇵🇰 اردو</a> ·
  <a href="README.ro.md">🇷🇴 Română</a> ·
  <a href="README.sv.md">🇸🇪 Svenska</a> ·
  <a href="README.el.md">🇬🇷 Ελληνικά</a> ·
  <a href="README.hu.md">🇭🇺 Magyar</a> ·
  <a href="README.fi.md">🇫🇮 Suomi</a> ·
  <a href="README.da.md">🇩🇰 Dansk</a> ·
  <a href="README.nb.md">🇳🇴 Norsk</a>
</p>

---

## What is ZeroClaw?

ZeroClaw is a lightweight, mutable, and extensible AI assistant infrastructure built in Rust. It connects different LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).

### Key Features

- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Support for OpenAI, Anthropic, Google Gemini, Ollama, and others
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and others
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Easily add custom tools
- **🔒 Security-first**: Reverse proxy, privacy-first design

---

## Quick Start

### Prerequisites

- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)

### Installation

```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```

### With Docker

```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```

---

## Configuration

ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.

```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```

---

## Documentation

For detailed documentation, see:

- [Documentation hub](docs/README.md)
- [Command reference](docs/commands-reference.md)
- [Provider reference](docs/providers-reference.md)
- [Channel reference](docs/channels-reference.md)
- [Configuration reference](docs/config-reference.md)

---

## Contributing

Contributions are welcome! Please read the [Contributing Guide](CONTRIBUTING.md).

---

## License

This project is dual-licensed:

- MIT License
- Apache License, Version 2.0

See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.

---

## Community

- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)

---

## Sponsors

If ZeroClaw is useful to you, please consider buying us a coffee:

[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)

<p align="center">
  <img src="zeroclaw.png" alt="ZeroClaw" width="200" />
</p>

<h1 align="center">ZeroClaw 🦀</h1>

<p align="center">
  <strong>Zero overhead. Zero compromise. 100% Rust. 100% agnostic.</strong><br>
  ⚡️ <strong>Runs on a $10 device with &lt;5MB RAM: that's 99% less memory than OpenClaw and 98% cheaper than a Mac mini!</strong>
</p>

<p align="center">
  <a href="LICENSE-APACHE"><img src="https://img.shields.io/badge/license-MIT%20OR%20Apache%202.0-blue.svg" alt="License: MIT OR Apache-2.0" /></a>
  <a href="NOTICE"><img src="https://img.shields.io/badge/contributors-27+-green.svg" alt="Contributors" /></a>
  <a href="https://buymeacoffee.com/argenistherose"><img src="https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee" alt="Buy Me a Coffee" /></a>
  <a href="https://x.com/zeroclawlabs?s=21"><img src="https://img.shields.io/badge/X-%40zeroclawlabs-000000?style=flat&logo=x&logoColor=white" alt="X: @zeroclawlabs" /></a>
  <a href="https://zeroclawlabs.cn/group.jpg"><img src="https://img.shields.io/badge/WeChat-Group-B7D7A8?logo=wechat&logoColor=white" alt="WeChat Group" /></a>
  <a href="https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search"><img src="https://img.shields.io/badge/Xiaohongshu-Official-FF2442?style=flat" alt="Xiaohongshu: Official" /></a>
  <a href="https://t.me/zeroclawlabs"><img src="https://img.shields.io/badge/Telegram-%40zeroclawlabs-26A5E4?style=flat&logo=telegram&logoColor=white" alt="Telegram: @zeroclawlabs" /></a>
  <a href="https://www.facebook.com/groups/zeroclaw"><img src="https://img.shields.io/badge/Facebook-Group-1877F2?style=flat&logo=facebook&logoColor=white" alt="Facebook Group" /></a>
</p>

<p align="center">
|
||||
🌐 <strong>Bahasa:</strong>
|
||||
<a href="README.md">🇺🇸 English</a> ·
|
||||
<a href="README.zh-CN.md">🇨🇳 简体中文</a> ·
|
||||
<a href="README.ja.md">🇯🇵 日本語</a> ·
|
||||
<a href="README.ko.md">🇰🇷 한국어</a> ·
|
||||
<a href="README.vi.md">🇻🇳 Tiếng Việt</a> ·
|
||||
<a href="README.tl.md">🇵🇭 Tagalog</a> ·
|
||||
<a href="README.es.md">🇪🇸 Español</a> ·
|
||||
<a href="README.pt.md">🇧🇷 Português</a> ·
|
||||
<a href="README.it.md">🇮🇹 Italiano</a> ·
|
||||
<a href="README.de.md">🇩🇪 Deutsch</a> ·
|
||||
<a href="README.fr.md">🇫🇷 Français</a> ·
|
||||
<a href="README.ar.md">🇸🇦 العربية</a> ·
|
||||
<a href="README.hi.md">🇮🇳 हिन्दी</a> ·
|
||||
<a href="README.ru.md">🇷🇺 Русский</a> ·
|
||||
<a href="README.bn.md">🇧🇩 বাংলা</a> ·
|
||||
<a href="README.he.md">🇮🇱 עברית</a> ·
|
||||
<a href="README.pl.md">🇵🇱 Polski</a> ·
|
||||
<a href="README.cs.md">🇨🇿 Čeština</a> ·
|
||||
<a href="README.nl.md">🇳🇱 Nederlands</a> ·
|
||||
<a href="README.tr.md">🇹🇷 Türkçe</a> ·
|
||||
<a href="README.uk.md">🇺🇦 Українська</a> ·
|
||||
<a href="README.id.md">🇮🇩 Bahasa Indonesia</a> ·
|
||||
<a href="README.th.md">🇹🇭 ไทย</a> ·
|
||||
<a href="README.ur.md">🇵🇰 اردو</a> ·
|
||||
<a href="README.ro.md">🇷🇴 Română</a> ·
|
||||
<a href="README.sv.md">🇸🇪 Svenska</a> ·
|
||||
<a href="README.el.md">🇬🇷 Ελληνικά</a> ·
|
||||
<a href="README.hu.md">🇭🇺 Magyar</a> ·
|
||||
<a href="README.fi.md">🇫🇮 Suomi</a> ·
|
||||
<a href="README.da.md">🇩🇰 Dansk</a> ·
|
||||
<a href="README.nb.md">🇳🇴 Norsk</a>
|
||||
</p>
|
||||
|
||||
---

## What is ZeroClaw?

ZeroClaw is a lightweight, swappable, and extensible AI assistant infrastructure built with Rust. It connects multiple LLM providers (Anthropic, OpenAI, Google, Ollama, etc.) through a unified interface and supports multiple channels (Telegram, Matrix, CLI, etc.).

### Key Features

- **🦀 Written in Rust**: High performance, memory safety, and zero-cost abstractions
- **🔌 Provider-agnostic**: Supports OpenAI, Anthropic, Google Gemini, Ollama, and more
- **📱 Multi-channel**: Telegram, Matrix (with E2EE), CLI, and more
- **🧠 Pluggable memory**: SQLite and Markdown backends
- **🛠️ Extensible tools**: Easily add custom tools
- **🔒 Security-first**: Reverse proxy, privacy-first design

---

## Quick Start

### Requirements

- Rust 1.70+
- An LLM provider API key (Anthropic, OpenAI, etc.)

### Installation

```bash
# Clone the repository
git clone https://github.com/zeroclaw-labs/zeroclaw.git
cd zeroclaw

# Build
cargo build --release

# Run
cargo run --release
```

### With Docker

```bash
docker run -d \
  --name zeroclaw \
  -e ANTHROPIC_API_KEY=your_key \
  -v zeroclaw-data:/app/data \
  zeroclaw/zeroclaw:latest
```

---

## Configuration

ZeroClaw uses a YAML configuration file. By default, it looks for `config.yaml`.

```yaml
# Default provider
provider: anthropic

# Provider configuration
providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o

# Memory configuration
memory:
  backend: sqlite
  path: data/memory.db

# Channel configuration
channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
```

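To get started quickly, a minimal version of the configuration above can be bootstrapped from the shell. A sketch (the keys mirror the example, trimmed to a single provider; adjust to taste):

```shell
# Write a minimal config.yaml: Anthropic as the only provider, sqlite memory.
cat > config.yaml <<'EOF'
provider: anthropic

providers:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-3-5-sonnet-20241022

memory:
  backend: sqlite
  path: data/memory.db
EOF
grep -c 'provider' config.yaml
# → 2
```

The single-quoted heredoc delimiter (`'EOF'`) keeps `${ANTHROPIC_API_KEY}` literal in the file, so the placeholder is resolved at ZeroClaw's startup rather than at write time.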
---

## Documentation

For detailed documentation, see:

- [Documentation Hub](docs/README.md)
- [Command Reference](docs/commands-reference.md)
- [Provider Reference](docs/providers-reference.md)
- [Channel Reference](docs/channels-reference.md)
- [Configuration Reference](docs/config-reference.md)

---

## Contributing

Contributions are welcome! Please read the [Contributing Guide](CONTRIBUTING.md).

---

## License

This project is dual-licensed:

- MIT License
- Apache License, Version 2.0

See [LICENSE-APACHE](LICENSE-APACHE) and [LICENSE-MIT](LICENSE-MIT) for details.

---

## Community

- [Telegram](https://t.me/zeroclawlabs)
- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
- [WeChat Group](https://zeroclawlabs.cn/group.jpg)

---

## Sponsors

If ZeroClaw is useful to you, please consider buying us a coffee:

[![Buy Me a Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-Donate-yellow.svg?style=flat&logo=buy-me-a-coffee)](https://buymeacoffee.com/argenistherose)