From fa2faf408dca33ef1de41bb85e85b03b6d792f3c Mon Sep 17 00:00:00 2001
From: simianastronaut
Date: Sat, 7 Mar 2026 21:05:23 -0500
Subject: [PATCH 01/35] chore: update .gitignore, CODEOWNERS, and dependabot
configuration
- Add .zeroclaw/* to .gitignore to exclude ZeroClaw files from version control.
- Update CODEOWNERS to include @SimianAstronaut7 as a maintainer alongside @jordanthejet.
- Change dependabot target branch from dev to master for all update configurations.
- Revise master-branch-flow documentation to clarify active workflows and triggers.
---
.github/CODEOWNERS | 2 +-
.github/dependabot.yml | 6 +-
.github/workflows/README.md | 17 +-
.github/workflows/master-branch-flow.md | 261 +++++++-----------------
.gitignore | 1 +
5 files changed, 86 insertions(+), 201 deletions(-)
diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
index 7bd42b5ac..9925ba495 100644
--- a/.github/CODEOWNERS
+++ b/.github/CODEOWNERS
@@ -1,5 +1,5 @@
# Default owner for all files
-* @jordanthejet
+* @jordanthejet @SimianAstronaut7
# Important functional modules
/src/agent/** @theonlyhennygod
diff --git a/.github/dependabot.yml b/.github/dependabot.yml
index 166552a71..e797d7089 100644
--- a/.github/dependabot.yml
+++ b/.github/dependabot.yml
@@ -5,7 +5,7 @@ updates:
directory: "/"
schedule:
interval: daily
- target-branch: dev
+ target-branch: master
open-pull-requests-limit: 3
labels:
- "dependencies"
@@ -21,7 +21,7 @@ updates:
directory: "/"
schedule:
interval: daily
- target-branch: dev
+ target-branch: master
open-pull-requests-limit: 1
labels:
- "ci"
@@ -38,7 +38,7 @@ updates:
directory: "/"
schedule:
interval: daily
- target-branch: dev
+ target-branch: master
open-pull-requests-limit: 1
labels:
- "ci"
diff --git a/.github/workflows/README.md b/.github/workflows/README.md
index 8ccbcb769..9347bf356 100644
--- a/.github/workflows/README.md
+++ b/.github/workflows/README.md
@@ -10,21 +10,8 @@ Subdirectories are not valid locations for workflow entry files.
Repository convention:
1. Keep runnable workflow entry files at `.github/workflows/` root.
-2. Keep workflow-only helper scripts under `.github/workflows/scripts/`.
-3. Keep cross-tooling/local CI scripts under `scripts/ci/` when they are used outside Actions.
+2. Keep cross-tooling/local CI scripts under `dev/` or `scripts/ci/` when used outside Actions.
Workflow behavior documentation in this directory:
-- `.github/workflows/main-branch-flow.md`
-
-Current workflow helper scripts:
-
-- `.github/workflows/scripts/ci_workflow_owner_approval.js`
-- `.github/workflows/scripts/ci_license_file_owner_guard.js`
-- `.github/workflows/scripts/lint_feedback.js`
-- `.github/workflows/scripts/pr_auto_response_contributor_tier.js`
-- `.github/workflows/scripts/pr_auto_response_labeled_routes.js`
-- `.github/workflows/scripts/pr_check_status_nudge.js`
-- `.github/workflows/scripts/pr_intake_checks.js`
-- `.github/workflows/scripts/pr_labeler.js`
-- `.github/workflows/scripts/test_benchmarks_pr_comment.js`
+- `.github/workflows/master-branch-flow.md`
diff --git a/.github/workflows/master-branch-flow.md b/.github/workflows/master-branch-flow.md
index d790780bd..12d70a72e 100644
--- a/.github/workflows/master-branch-flow.md
+++ b/.github/workflows/master-branch-flow.md
@@ -14,174 +14,72 @@ ZeroClaw uses a single default branch: `master`. All contributor PRs target `mas
Current maintainers with PR approval authority: `theonlyhennygod` and `jordanthejet`.
+## Active Workflows
+
+| File | Trigger | Purpose |
+| --- | --- | --- |
+| `ci.yml` | `pull_request` → `master` | Test + build check on every PR |
+| `ci-full.yml` | `workflow_dispatch` | Full platform build matrix (manual) |
+| `release.yml` | `push` → `master` | Beta release on every master commit |
+| `promote-release.yml` | `workflow_dispatch` | Stable release (manual, version-gated) |
+
## Event Summary
-| Event | Main workflows |
+| Event | Workflows triggered |
| --- | --- |
-| PR activity (`pull_request_target`) | `pr-intake-checks.yml`, `pr-labeler.yml`, `pr-auto-response.yml` |
-| PR activity (`pull_request`) | `ci-run.yml`, `sec-audit.yml`, plus path-scoped workflows |
-| Push to `master` | `ci-run.yml`, `sec-audit.yml`, plus path-scoped workflows |
-| Tag push (`v*`) | `pub-release.yml` publish mode, `pub-docker-img.yml` publish job |
-| Scheduled/manual | `pub-release.yml` verification mode, `pub-homebrew-core.yml` (manual), `sec-codeql.yml`, `feature-matrix.yml`, `test-fuzz.yml`, `pr-check-stale.yml`, `pr-check-status.yml`, `sync-contributors.yml`, `test-benchmarks.yml`, `test-e2e.yml` |
-
-## Runtime and Docker Matrix
-
-Observed averages below are from recent completed runs (sampled from GitHub Actions on February 17, 2026). Values are directional, not SLA.
-
-| Workflow | Typical trigger in main flow | Avg runtime | Docker build? | Docker run? | Docker push? |
-| --- | --- | ---:| --- | --- | --- |
-| `pr-intake-checks.yml` | PR open/update (`pull_request_target`) | 14.5s | No | No | No |
-| `pr-labeler.yml` | PR open/update (`pull_request_target`) | 53.7s | No | No | No |
-| `pr-auto-response.yml` | PR/issue automation | 24.3s | No | No | No |
-| `ci-run.yml` | PR + push to `master` | 74.7s | No | No | No |
-| `sec-audit.yml` | PR + push to `master` | 127.2s | No | No | No |
-| `workflow-sanity.yml` | Workflow-file changes | 34.2s | No | No | No |
-| `pr-label-policy-check.yml` | Label policy/automation changes | 14.7s | No | No | No |
-| `pub-docker-img.yml` (`pull_request`) | Docker build-input PR changes | 240.4s | Yes | Yes | No |
-| `pub-docker-img.yml` (`push`) | tag push `v*` | 139.9s | Yes | No | Yes |
-| `pub-release.yml` | Tag push `v*` (publish) + manual/scheduled verification (no publish) | N/A in recent sample | No | No | No |
-| `pub-homebrew-core.yml` | Manual workflow dispatch only | N/A in recent sample | No | No | No |
-
-Notes:
-
-1. `pub-docker-img.yml` is the only workflow in the main PR/push path that builds Docker images.
-2. Container runtime verification (`docker run`) occurs in PR smoke only.
-3. Container registry push occurs on tag pushes (`v*`) only.
-4. `ci-run.yml` "Build (Smoke)" builds Rust binaries, not Docker images.
+| PR opened or updated against `master` | `ci.yml` |
+| Push to `master` (including after merge) | `release.yml` |
+| Manual dispatch | `ci-full.yml`, `promote-release.yml` |
## Step-By-Step
-### 1) PR from branch in this repository -> `master`
+### 1) PR → `master`
-1. Contributor opens or updates PR against `master`.
-2. `pull_request_target` automation runs (typical runtime):
- - `pr-intake-checks.yml` posts intake warnings/errors.
- - `pr-labeler.yml` sets size/risk/scope labels.
- - `pr-auto-response.yml` runs first-interaction and label routes.
-3. `pull_request` CI workflows start:
- - `ci-run.yml`
- - `sec-audit.yml`
- - path-scoped workflows if matching files changed:
- - `pub-docker-img.yml` (Docker build-input paths only)
- - `workflow-sanity.yml` (workflow files only)
- - `pr-label-policy-check.yml` (label-policy files only)
-4. In `ci-run.yml`, `changes` computes:
- - `docs_only`
- - `docs_changed`
- - `rust_changed`
- - `workflow_changed`
-5. `build` runs for Rust-impacting changes.
-6. On PRs, full lint/test/docs checks run when PR has label `ci:full`:
- - `lint`
- - `lint-strict-delta`
- - `test`
- - `docs-quality`
-7. If `.github/workflows/**` changed, `workflow-owner-approval` must pass.
-8. `lint-feedback` posts actionable comment if lint/docs gates fail.
-9. `CI Required Gate` aggregates results to final pass/fail.
-10. Maintainer (`theonlyhennygod` or `jordanthejet`) merges PR once checks and review policy are satisfied.
-11. Merge emits a `push` event on `master` (see scenario 3).
+1. Contributor opens or updates a PR against `master`.
+2. `ci.yml` starts:
+ - `test` job: runs `cargo nextest run --locked` on `ubuntu-latest` with Rust 1.92.0 and mold linker.
+ - `build` job (matrix): compiles release binary on `x86_64-unknown-linux-gnu` and `aarch64-apple-darwin`.
+ - Concurrency group cancels in-progress runs for the same PR on new pushes.
+3. All jobs must pass before merge.
+4. Maintainer (`theonlyhennygod` or `jordanthejet`) merges PR once checks and review policy are satisfied.
+5. Merge emits a `push` event on `master` (see section 2).
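+
+To approximate the PR checks locally before pushing, a rough equivalent is the following (a sketch, assuming `cargo-nextest` is installed locally; the authoritative job definitions, including the Rust toolchain and linker setup, live in `ci.yml`):
+
+```bash
+# Same test invocation the `test` job uses
+cargo nextest run --locked
+
+# Approximate the `build` job for the two PR targets
+cargo build --release --locked --target x86_64-unknown-linux-gnu
+cargo build --release --locked --target aarch64-apple-darwin   # needs a macOS host or cross toolchain
+```
+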
-### 2) PR from fork -> `master`
+### 2) Push to `master` (including after merge)
-1. External contributor opens PR from `fork/` into `zeroclaw:master`.
-2. Immediately on `opened`:
- - `pull_request_target` workflows start with base-repo context and base-repo token:
- - `pr-intake-checks.yml`
- - `pr-labeler.yml`
- - `pr-auto-response.yml`
- - `pull_request` workflows are queued for the fork head commit:
- - `ci-run.yml`
- - `sec-audit.yml`
- - path-scoped workflows (`pub-docker-img.yml`, `workflow-sanity.yml`, `pr-label-policy-check.yml`) if changed files match.
-3. Fork-specific permission behavior in `pull_request` workflows:
- - token is restricted (read-focused), so jobs that try to write PR comments/status extras can be limited.
- - secrets from the base repo are not exposed to fork PR `pull_request` jobs.
-4. Approval gate possibility:
- - if Actions settings require maintainer approval for fork workflows, the `pull_request` run stays in `action_required`/waiting state until approved.
-5. Event fan-out after labeling:
- - `pr-labeler.yml` and manual label changes emit `labeled`/`unlabeled` events.
- - those events retrigger `pull_request_target` automation (`pr-labeler.yml` and `pr-auto-response.yml`), creating extra run volume/noise.
-6. When contributor pushes new commits to fork branch (`synchronize`):
- - reruns: `pr-intake-checks.yml`, `pr-labeler.yml`, `ci-run.yml`, `sec-audit.yml`, and matching path-scoped PR workflows.
- - does not rerun `pr-auto-response.yml` unless label/open events occur.
-7. `ci-run.yml` execution details for fork PR:
- - `changes` computes `docs_only`, `docs_changed`, `rust_changed`, `workflow_changed`.
- - `build` runs for Rust-impacting changes.
- - `lint`/`lint-strict-delta`/`test`/`docs-quality` run on PR when `ci:full` label exists.
- - `workflow-owner-approval` runs when `.github/workflows/**` changed.
- - `CI Required Gate` emits final pass/fail for the PR head.
-8. Fork PR merge blockers to check first when diagnosing stalls:
- - run approval pending for fork workflows.
- - `workflow-owner-approval` failing on workflow-file changes.
- - `CI Required Gate` failure caused by upstream jobs.
- - repeated `pull_request_target` reruns from label churn causing noisy signals.
-9. After merge, normal `push` workflows on `master` execute (scenario 3).
+1. Commit reaches `master`.
+2. `release.yml` (Beta Release) starts:
+ - `version` job: computes beta tag as `v{cargo_version}-beta.{run_number}`.
+ - `build` job (matrix, 4 targets): `x86_64-linux`, `aarch64-linux`, `aarch64-darwin`, `x86_64-windows`.
+ - `publish` job: generates `SHA256SUMS`, creates a GitHub pre-release with all artifacts. Artifact retention: 7 days.
+ - `docker` job: builds multi-platform image (`linux/amd64,linux/arm64`) and pushes to `ghcr.io` with `:beta` and the versioned beta tag.
+3. This runs on every push to `master` without filtering. Every merged PR produces a beta pre-release.
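+
+The beta tag scheme can be sketched in shell. This assumes the version comes from the workspace `Cargo.toml` and `{run_number}` is the Actions run number; the actual `version` job may derive them differently:
+
+```bash
+# Illustrative only: compose the beta tag described above
+CARGO_VERSION=$(grep -m1 '^version' Cargo.toml | cut -d '"' -f2)
+BETA_TAG="v${CARGO_VERSION}-beta.${GITHUB_RUN_NUMBER}"
+echo "$BETA_TAG"   # e.g. v0.2.0-beta.42
+```
+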
-### 3) Push to `master` (including after merge)
+### 3) Stable Release (manual)
-1. Commit reaches `master` (usually from a merged PR).
-2. `ci-run.yml` runs on `push`.
-3. `sec-audit.yml` runs on `push`.
-4. Path-filtered workflows run only if touched files match their filters.
-5. In `ci-run.yml`, push behavior differs from PR behavior:
- - Rust path: `lint`, `lint-strict-delta`, `test`, `build` are expected.
- - Docs/non-rust paths: fast-path behavior applies.
-6. `CI Required Gate` computes overall push result.
+1. Maintainer runs `promote-release.yml` via `workflow_dispatch` with a version input (e.g. `0.2.0`).
+2. `validate` job checks:
+ - Input matches semver `X.Y.Z` format.
+ - `Cargo.toml` version matches input exactly.
+ - Tag `vX.Y.Z` does not already exist on the remote.
+3. `build` job (matrix, same 4 targets as beta): compiles release binary.
+4. `publish` job: generates `SHA256SUMS`, creates a stable GitHub Release (not pre-release). Artifact retention: 14 days.
+5. `docker` job: pushes to `ghcr.io` with `:latest` and `:vX.Y.Z`.
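+
+The three `validate` checks correspond roughly to the shell below, shown as a sketch with a hypothetical `VERSION` input rather than the workflow's exact steps:
+
+```bash
+VERSION="0.2.0"   # workflow_dispatch input (hypothetical value)
+
+# 1. Input matches semver X.Y.Z
+echo "$VERSION" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$' || { echo "not X.Y.Z"; exit 1; }
+
+# 2. Cargo.toml version matches the input exactly
+CARGO_VERSION=$(grep -m1 '^version' Cargo.toml | cut -d '"' -f2)
+[ "$CARGO_VERSION" = "$VERSION" ] || { echo "Cargo.toml has $CARGO_VERSION"; exit 1; }
+
+# 3. Tag vX.Y.Z must not already exist on the remote
+git ls-remote --exit-code --tags origin "refs/tags/v${VERSION}" && { echo "tag exists"; exit 1; }
+```
+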
-## Docker Publish Logic
+### 4) Full Platform Build (manual)
-Workflow: `.github/workflows/pub-docker-img.yml`
+1. Maintainer runs `ci-full.yml` via `workflow_dispatch`.
+2. `build` job (matrix, 3 targets): `aarch64-linux-gnu`, `x86_64-darwin` (macOS 15 Intel), `x86_64-windows-msvc`.
+3. Build-only, no tests, no publish. Used to verify cross-compilation on platforms not covered by `ci.yml`.
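+
+A maintainer can also dispatch it from a terminal (a sketch, assuming the GitHub CLI is authenticated for this repository):
+
+```bash
+# Trigger the manual full-platform build and check that it started
+gh workflow run ci-full.yml --ref master
+gh run list --workflow=ci-full.yml --limit 1
+```
+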
-### PR behavior
+## Build Targets by Workflow
-1. Triggered on `pull_request` to `master` when Docker build-input paths change.
-2. Runs `PR Docker Smoke` job:
- - Builds local smoke image with Blacksmith builder.
- - Verifies container with `docker run ... --version`.
-3. Typical runtime in recent sample: ~240.4s.
-4. No registry push happens on PR events.
-
-### Push behavior
-
-1. `publish` job runs on tag pushes `v*` only.
-2. Workflow trigger includes semantic version tag pushes (`v*`) only.
-3. Login to `ghcr.io` uses `${{ github.actor }}` and `${{ secrets.GITHUB_TOKEN }}`.
-4. Tag computation includes semantic tag from pushed git tag (`vX.Y.Z`) + SHA tag.
-5. Multi-platform publish is used for tag pushes (`linux/amd64,linux/arm64`).
-6. Typical runtime in recent sample: ~139.9s.
-7. Result: pushed image tags under `ghcr.io//`.
-
-Important: Docker publish requires a `v*` tag push; regular `master` branch pushes do not publish images.
-
-## Release Logic
-
-Workflow: `.github/workflows/pub-release.yml`
-
-1. Trigger modes:
- - Tag push `v*` -> publish mode.
- - Manual dispatch -> verification-only or publish mode (input-driven).
- - Weekly schedule -> verification-only mode.
-2. `prepare` resolves release context (`release_ref`, `release_tag`, publish/draft mode) and validates manual publish inputs.
- - publish mode enforces `release_tag` == `Cargo.toml` version at the tag commit.
-3. `build-release` builds matrix artifacts across Linux/macOS/Windows targets.
-4. `verify-artifacts` enforces presence of all expected archives before any publish attempt.
-5. In publish mode, workflow generates SBOM (`CycloneDX` + `SPDX`), `SHA256SUMS`, keyless cosign signatures, and verifies GHCR release-tag availability.
-6. In publish mode, workflow creates/updates the GitHub Release for the resolved tag and commit-ish.
-
-Manual Homebrew formula flow:
-
-1. Run `.github/workflows/pub-homebrew-core.yml` with `release_tag=vX.Y.Z`.
-2. Use `dry_run=true` first to validate formula patch and metadata.
-3. Use `dry_run=false` to push from bot fork and open `homebrew-core` PR.
-
-## Merge/Policy Notes
-
-1. Workflow-file changes (`.github/workflows/**`) activate owner-approval gate in `ci-run.yml`.
-2. PR lint/test strictness is intentionally controlled by `ci:full` label.
-3. `sec-audit.yml` runs on both PR and push, plus scheduled weekly.
-4. Some workflows are operational and non-merge-path (`pr-check-stale`, `pr-check-status`, `sync-contributors`, etc.).
-5. Workflow-specific JavaScript helpers are organized under `.github/workflows/scripts/`.
+| Target | `ci.yml` | `ci-full.yml` | `release.yml` | `promote-release.yml` |
+| --- | :---: | :---: | :---: | :---: |
+| `x86_64-unknown-linux-gnu` | ✓ | | ✓ | ✓ |
+| `aarch64-unknown-linux-gnu` | | ✓ | ✓ | ✓ |
+| `aarch64-apple-darwin` | ✓ | | ✓ | ✓ |
+| `x86_64-apple-darwin` | | ✓ | | |
+| `x86_64-pc-windows-msvc` | | ✓ | ✓ | ✓ |
## Mermaid Diagrams
@@ -189,41 +87,40 @@ Manual Homebrew formula flow:
```mermaid
flowchart TD
- A["PR opened or updated -> master"] --> B["pull_request_target lane"]
- B --> B1["pr-intake-checks.yml"]
- B --> B2["pr-labeler.yml"]
- B --> B3["pr-auto-response.yml"]
- A --> C["pull_request CI lane"]
- C --> C1["ci-run.yml"]
- C --> C2["sec-audit.yml"]
- C --> C3["pub-docker-img.yml (if Docker paths changed)"]
- C --> C4["workflow-sanity.yml (if workflow files changed)"]
- C --> C5["pr-label-policy-check.yml (if policy files changed)"]
- C1 --> D["CI Required Gate"]
- D --> E{"Checks + review policy pass?"}
- E -->|No| F["PR stays open"]
- E -->|Yes| G["Merge PR"]
- G --> H["push event on master"]
+ A["PR opened or updated → master"] --> B["ci.yml"]
+ B --> B1["test: cargo nextest (ubuntu-latest)"]
+ B --> B2["build: x86_64-linux + aarch64-darwin"]
+ B1 & B2 --> C{"Checks pass?"}
+ C -->|No| D["PR stays open"]
+ C -->|Yes| E["Maintainer merges"]
+ E --> F["push event on master"]
```
-### Release
+### Beta Release (on every master push)
```mermaid
flowchart TD
- A["Commit reaches master"] --> B["ci-run.yml"]
- A --> C["sec-audit.yml"]
- A --> D["path-scoped workflows (if matched)"]
- T["Tag push v*"] --> R["pub-release.yml"]
- W["Manual/Scheduled release verify"] --> R
- T --> P["pub-docker-img.yml publish job"]
- R --> R1["Artifacts + SBOM + checksums + signatures + GitHub Release"]
- W --> R2["Verification build only (no GitHub Release publish)"]
- P --> P1["Push ghcr image tags (version + sha)"]
+ A["Push to master"] --> B["release.yml"]
+ B --> B1["version: compute v{x.y.z}-beta.{N}"]
+ B1 --> B2["build: 4 targets"]
+ B2 --> B3["publish: GitHub pre-release + SHA256SUMS"]
+ B2 --> B4["docker: push ghcr.io :beta + versioned tag"]
+```
+
+### Stable Release (manual)
+
+```mermaid
+flowchart TD
+ A["workflow_dispatch: version=X.Y.Z"] --> B["promote-release.yml"]
+ B --> B1["validate: semver + Cargo.toml + tag uniqueness"]
+ B1 --> B2["build: 4 targets"]
+ B2 --> B3["publish: GitHub stable release + SHA256SUMS"]
+ B2 --> B4["docker: push ghcr.io :latest + :vX.Y.Z"]
```
## Quick Troubleshooting
-1. Unexpected skipped jobs: inspect `scripts/ci/detect_change_scope.sh` outputs.
-2. Workflow-change PR blocked: verify `WORKFLOW_OWNER_LOGINS` and approvals.
-3. Fork PR appears stalled: check whether Actions run approval is pending.
-4. Docker not published: confirm a `v*` tag was pushed to the intended commit.
+1. **CI failing on PR**: check `test` job logs for test failures; check `build` job for compile errors.
+2. **Beta release not appearing**: confirm the push landed on `master` (not another branch); check `release.yml` run status.
+3. **Promote release failing at validate**: ensure `Cargo.toml` version matches the input version and the tag does not already exist.
+4. **Full matrix build needed**: run `ci-full.yml` manually from the Actions tab.
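+
+For items 2 and 3, the GitHub CLI gives a quick terminal-side check (a sketch, assuming `gh` is authenticated for this repository):
+
+```bash
+# Did release.yml run for the latest master push, and did it succeed?
+gh run list --workflow=release.yml --branch master --limit 3
+
+# Which releases (beta pre-releases and stable) were actually published?
+gh release list --limit 5
+```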
diff --git a/.gitignore b/.gitignore
index 89a1f8b5b..30fbb817d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -29,3 +29,4 @@ venv/
*.pem
credentials.json
.worktrees/
+.zeroclaw/*
From 8c0375a9ba792eacacb66049593f0c6deb6c3a07 Mon Sep 17 00:00:00 2001
From: simianastronaut
Date: Sat, 7 Mar 2026 21:39:10 -0500
Subject: [PATCH 02/35] Skill Creator (from anthropic)
---
.claude/skills/skill-creator/LICENSE.txt | 202 +++
.claude/skills/skill-creator/SKILL.md | 485 ++++++
.../skills/skill-creator/agents/analyzer.md | 274 ++++
.../skills/skill-creator/agents/comparator.md | 202 +++
.claude/skills/skill-creator/agents/grader.md | 223 +++
.../skill-creator/assets/eval_review.html | 146 ++
.../eval-viewer/generate_review.py | 471 ++++++
.../skill-creator/eval-viewer/viewer.html | 1325 +++++++++++++++++
.../skill-creator/references/schemas.md | 430 ++++++
.../skills/skill-creator/scripts/__init__.py | 0
.../scripts/aggregate_benchmark.py | 401 +++++
.../skill-creator/scripts/generate_report.py | 326 ++++
.../scripts/improve_description.py | 247 +++
.../skill-creator/scripts/package_skill.py | 136 ++
.../skill-creator/scripts/quick_validate.py | 103 ++
.../skills/skill-creator/scripts/run_eval.py | 310 ++++
.../skills/skill-creator/scripts/run_loop.py | 328 ++++
.claude/skills/skill-creator/scripts/utils.py | 47 +
18 files changed, 5656 insertions(+)
create mode 100644 .claude/skills/skill-creator/LICENSE.txt
create mode 100644 .claude/skills/skill-creator/SKILL.md
create mode 100644 .claude/skills/skill-creator/agents/analyzer.md
create mode 100644 .claude/skills/skill-creator/agents/comparator.md
create mode 100644 .claude/skills/skill-creator/agents/grader.md
create mode 100644 .claude/skills/skill-creator/assets/eval_review.html
create mode 100644 .claude/skills/skill-creator/eval-viewer/generate_review.py
create mode 100644 .claude/skills/skill-creator/eval-viewer/viewer.html
create mode 100644 .claude/skills/skill-creator/references/schemas.md
create mode 100644 .claude/skills/skill-creator/scripts/__init__.py
create mode 100755 .claude/skills/skill-creator/scripts/aggregate_benchmark.py
create mode 100755 .claude/skills/skill-creator/scripts/generate_report.py
create mode 100755 .claude/skills/skill-creator/scripts/improve_description.py
create mode 100755 .claude/skills/skill-creator/scripts/package_skill.py
create mode 100755 .claude/skills/skill-creator/scripts/quick_validate.py
create mode 100755 .claude/skills/skill-creator/scripts/run_eval.py
create mode 100755 .claude/skills/skill-creator/scripts/run_loop.py
create mode 100644 .claude/skills/skill-creator/scripts/utils.py
diff --git a/.claude/skills/skill-creator/LICENSE.txt b/.claude/skills/skill-creator/LICENSE.txt
new file mode 100644
index 000000000..7a4a3ea24
--- /dev/null
+++ b/.claude/skills/skill-creator/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/.claude/skills/skill-creator/SKILL.md b/.claude/skills/skill-creator/SKILL.md
new file mode 100644
index 000000000..65b3a402d
--- /dev/null
+++ b/.claude/skills/skill-creator/SKILL.md
@@ -0,0 +1,485 @@
+---
+name: skill-creator
+description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy.
+---
+
+# Skill Creator
+
+A skill for creating new skills and iteratively improving them.
+
+At a high level, the process of creating a skill goes like this:
+
+- Decide what you want the skill to do and roughly how it should do it
+- Write a draft of the skill
+- Create a few test prompts and run claude-with-access-to-the-skill on them
+- Help the user evaluate the results both qualitatively and quantitatively
+ - While the runs happen in the background, draft some quantitative evals if there aren't any (if some already exist, either use them as is or modify them if you feel something needs to change). Then explain them to the user (or, if they already existed, explain the existing ones)
+ - Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics
+- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
+- Repeat until you're satisfied
+- Expand the test set and try again at larger scale
+
+Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.
+
+On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.
+
+Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.
+
+Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.
+
+Cool? Cool.
+
+## Communicating with the user
+
+The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.
+
+So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:
+
+- "evaluation" and "benchmark" are borderline, but OK
+- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them
+
+It's OK to briefly explain terms if you're in doubt, and feel free to clarify terms with a short definition if you're unsure if the user will get it.
+
+---
+
+## Creating a skill
+
+### Capture Intent
+
+Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step.
+
+1. What should this skill enable Claude to do?
+2. When should this skill trigger? (what user phrases/contexts)
+3. What's the expected output format?
+4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.
+
+### Interview and Research
+
+Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.
+
+Check available MCPs. If any are useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce the burden on the user.
+
+### Write the SKILL.md
+
+Based on the user interview, fill in these components:
+
+- **name**: Skill identifier
+- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
+- **compatibility**: Required tools, dependencies (optional, rarely needed)
+- **the rest of the skill :)**
+
+### Skill Writing Guide
+
+#### Anatomy of a Skill
+
+```
+skill-name/
+├── SKILL.md (required)
+│   ├── YAML frontmatter (name, description required)
+│   └── Markdown instructions
+└── Bundled Resources (optional)
+    ├── scripts/ - Executable code for deterministic/repetitive tasks
+    ├── references/ - Docs loaded into context as needed
+    └── assets/ - Files used in output (templates, icons, fonts)
+```
+
+#### Progressive Disclosure
+
+Skills use a three-level loading system:
+1. **Metadata** (name + description) - Always in context (~100 words)
+2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal)
+3. **Bundled resources** - As needed (unlimited, scripts can execute without loading)
+
+These word counts are approximate and you can feel free to go longer if needed.
+
+**Key patterns:**
+- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
+- Reference files clearly from SKILL.md with guidance on when to read them
+- For large reference files (>300 lines), include a table of contents
+
+**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant:
+```
+cloud-deploy/
+├── SKILL.md (workflow + selection)
+└── references/
+    ├── aws.md
+    ├── gcp.md
+    └── azure.md
+```
+Claude reads only the relevant reference file.
+
+#### Principle of Lack of Surprise
+
+This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. If a skill's contents were described to the user, its intent should not surprise them. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though.
+
+#### Writing Patterns
+
+Prefer using the imperative form in instructions.
+
+**Defining output formats** - You can do it like this:
+```markdown
+## Report structure
+ALWAYS use this exact template:
+# [Title]
+## Executive summary
+## Key findings
+## Recommendations
+```
+
+**Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little):
+```markdown
+## Commit message format
+**Example 1:**
+Input: Added user authentication with JWT tokens
+Output: feat(auth): implement JWT-based authentication
+```
+
+### Writing Style
+
+Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.
+
+### Test Cases
+
+After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.
+
+Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.
+
+```json
+{
+  "skill_name": "example-skill",
+  "evals": [
+    {
+      "id": 1,
+      "prompt": "User's task prompt",
+      "expected_output": "Description of expected result",
+      "files": []
+    }
+  ]
+}
+```
+
+See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later).
+
+## Running and evaluating test cases
+
+This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill.
+
+Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go.
+
+### Step 1: Spawn all runs (with-skill AND baseline) in the same turn
+
+For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.
+
+**With-skill run:**
+
+```
+Execute this task:
+- Skill path:
+- Task:
+- Input files:
+- Save outputs to: <workspace>/iteration-<N>/eval-<name>/with_skill/outputs/
+- Outputs to save:
+```
+
+**Baseline run** (same prompt, but the baseline depends on context):
+- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`.
+- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-dir> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`.
+
+Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.
+
+```json
+{
+ "eval_id": 0,
+ "eval_name": "descriptive-name-here",
+ "prompt": "The user's task prompt",
+ "assertions": []
+}
+```
+
+### Step 2: While runs are in progress, draft assertions
+
+Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check.
+
+Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.
+
+Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.
+
+### Step 3: As runs complete, capture timing data
+
+When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory:
+
+```json
+{
+ "total_tokens": 84852,
+ "duration_ms": 23332,
+ "total_duration_seconds": 23.3
+}
+```
+
+This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
+
+### Step 4: Grade, aggregate, and launch the viewer
+
+Once all runs are done:
+
+1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names; see the sketch after this list for the expected shape. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
+
+2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory:
+   ```bash
+   python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <skill-name>
+   ```
+   This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects.
+   Put each with_skill version before its baseline counterpart.
+
+3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.
+
+4. **Launch the viewer** with both qualitative outputs and quantitative data:
+   ```bash
+   nohup python <path-to-skill-creator>/eval-viewer/generate_review.py \
+     <workspace>/iteration-N \
+     --skill-name "my-skill" \
+     --benchmark <workspace>/iteration-N/benchmark.json \
+     > /dev/null 2>&1 &
+   VIEWER_PID=$!
+   ```
+   For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
+
+   **Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output.html>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up.
+
+Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.
+
+5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."
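+
+For reference, here is a minimal sketch of what step 1's `grading.json` can look like for one run. The `expectations` field names (`text`, `passed`, `evidence`) are the required ones; the surrounding structure and the values are illustrative assumptions, and the grader could just as well write the file from a script:
+
+```bash
+# Hypothetical example: write a grading.json with the required field names
+cat > grading.json <<'EOF'
+{
+  "expectations": [
+    {"text": "Report contains an Executive summary section", "passed": true, "evidence": "Found '## Executive summary' heading in report.md"},
+    {"text": "Chart has labeled axes", "passed": false, "evidence": "No axis labels found in chart output"}
+  ]
+}
+EOF
+```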
+
+### What the user sees in the viewer
+
+The "Outputs" tab shows one test case at a time:
+- **Prompt**: the task that was given
+- **Output**: the files the skill produced, rendered inline where possible
+- **Previous Output** (iteration 2+): collapsed section showing last iteration's output
+- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail
+- **Feedback**: a textbox that auto-saves as they type
+- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox
+
+The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.
+
+Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`.
+
+### Step 5: Read the feedback
+
+When the user tells you they're done, read `feedback.json`:
+
+```json
+{
+  "reviews": [
+    {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
+    {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
+    {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
+  ],
+  "status": "complete"
+}
+```
+
+Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
+
+Kill the viewer server when you're done with it:
+
+```bash
+kill $VIEWER_PID 2>/dev/null
+```
+
+---
+
+## Improving the skill
+
+This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.
+
+### How to think about improvements
+
+1. **Generalize from the feedback.** The big picture thing that's happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples in and out and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. Rather than put in fiddly overfitty changes, or oppressively constrictive MUSTs, if there's some stubborn issue, you might try branching out and using different metaphors, or recommending different patterns of working. It's relatively cheap to try and maybe you'll land on something great.
+
+2. **Keep the prompt lean.** Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.
+
+3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task and why the user is writing what they wrote, and what they actually wrote, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.
+
+4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel.
+
+This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.
+
+### The iteration loop
+
+After improving the skill:
+
+1. Apply your improvements to the skill
+2. Rerun all test cases into a new `iteration-<N>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
+3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration
+4. Wait for the user to review and tell you they're done
+5. Read the new feedback, improve again, repeat
+
+Keep going until:
+- The user says they're happy
+- The feedback is all empty (everything looks good)
+- You're not making meaningful progress
+
+---
+
+## Advanced: Blind comparison
+
+For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.
+
+This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.
+
+---
+
+## Description Optimization
+
+The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.
+
+### Step 1: Generate trigger eval queries
+
+Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:
+
+```json
+[
+ {"query": "the user prompt", "should_trigger": true},
+ {"query": "another prompt", "should_trigger": false}
+]
+```
+
+The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).
+
+Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"`
+
+Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"`
+
+For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.
+
+For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.
+
+The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.
+
+### Step 2: Review with user
+
+Present the eval set to the user for review using the HTML template:
+
+1. Read the template from `assets/eval_review.html`
+2. Replace the placeholders:
+ - `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
+ - `__SKILL_NAME_PLACEHOLDER__` → the skill's name
+ - `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description
+3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html`
+4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
+5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`)
+
+This step matters — bad eval queries lead to bad descriptions.
+
+### Step 3: Run the optimization loop
+
+Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
+
+Save the eval set to the workspace, then run in the background:
+
+```bash
+python -m scripts.run_loop \
+  --eval-set <path/to/eval_set.json> \
+  --skill-path <path/to/skill> \
+  --model <model-id> \
+  --max-iterations 5 \
+  --verbose
+```
+
+Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.
+
+While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.
+
+This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting.
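+
+The internals of `run_loop` aren't spelled out here, so treat this as an illustrative sketch of the selection rule only (the field names are assumptions):
+
+```python
+# Per-query trigger rate: the same query is run 3 times.
+def trigger_rate(outcomes: list[bool]) -> float:
+    return sum(outcomes) / len(outcomes)
+
+# "Best" means best on the held-out test split, not the train split,
+# so a description that merely memorizes the train queries doesn't win.
+def pick_best(iterations: list[dict]) -> str:
+    # assumed shape per item: {"description": str, "train_score": float, "test_score": float}
+    return max(iterations, key=lambda it: it["test_score"])["description"]
+```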
+
+### How skill triggering works
+
+Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.
+
+This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.
+
+### Step 4: Apply the result
+
+Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.
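+
+A rough sketch of that final edit, assuming the loop's JSON result was saved to `optimization_result.json` (the result path and skill path are hypothetical):
+
+```python
+import json
+import re
+from pathlib import Path
+
+best = json.loads(Path("optimization_result.json").read_text())["best_description"]  # hypothetical path
+skill_md = Path("my-skill/SKILL.md")  # hypothetical skill location
+
+# Swap the description line inside the YAML frontmatter.
+# If the new description contains YAML-special characters, quote it.
+updated = re.sub(
+    r"(?m)^description:.*$",
+    lambda _: f"description: {best}",
+    skill_md.read_text(),
+    count=1,
+)
+skill_md.write_text(updated)
+```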
+
+---
+
+### Package and Present (only if `present_files` tool is available)
+
+Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:
+
+```bash
+python -m scripts.package_skill
+```
+
+After packaging, direct the user to the resulting `.skill` file path so they can install it.
+
+---
+
+## Claude.ai-specific instructions
+
+In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:
+
+**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.
+
+**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"
+
+**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.
+
+**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.
+
+**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`) which is only available in Claude Code. Skip it if you're on Claude.ai.
+
+**Blind comparison**: Requires subagents. Skip it.
+
+**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file.
+
+**Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. In this case:
+- **Preserve the original name.** Note the skill's directory name and `name` frontmatter field -- use them unchanged. E.g., if the installed skill is `research-helper`, output `research-helper.skill` (not `research-helper-v2`).
+- **Copy to a writeable location before editing.** The installed skill path may be read-only. Copy to `/tmp/skill-name/`, edit there, and package from the copy.
+- **If packaging manually, stage in `/tmp/` first**, then copy to the output directory -- direct writes may fail due to permissions.
+
+---
+
+## Cowork-Specific Instructions
+
+If you're in Cowork, the main things to know are:
+
+- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts serially rather than in parallel.)
+- You don't have a browser or display, so when generating the eval viewer, use `--static <output.html>` to write a standalone HTML file instead of starting a server. Then give the user a link they can click to open the HTML in their browser.
+- For whatever reason, the Cowork setup seems to discourage Claude from generating the eval viewer after running the tests, so to reiterate: whether you're in Cowork or Claude Code, after running tests you should always generate the eval viewer with `generate_review.py` (not your own boutique HTML) so the human can look at examples before you start revising the skill and making corrections yourself. Sorry in advance but I'm gonna go all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating outputs yourself. You want to get them in front of the human ASAP!
+- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first).
+- Packaging works — `package_skill.py` just needs Python and a filesystem.
+- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine since it uses `claude -p` via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.
+- **Updating an existing skill**: The user might be asking you to update an existing skill, not create a new one. Follow the update guidance in the claude.ai section above.
+
+---
+
+## Reference files
+
+The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.
+
+- `agents/grader.md` — How to evaluate assertions against outputs
+- `agents/comparator.md` — How to do blind A/B comparison between two outputs
+- `agents/analyzer.md` — How to analyze why one version beat another
+
+The references/ directory has additional documentation:
+- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
+
+---
+
+Repeating one more time the core loop here for emphasis:
+
+- Figure out what the skill is about
+- Draft or edit the skill
+- Run claude-with-access-to-the-skill on test prompts
+- With the user, evaluate the outputs:
+ - Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them
+ - Run quantitative evals
+- Repeat until you and the user are satisfied
+- Package the final skill and return it to the user.
+
+Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens.
+
+Good luck!
diff --git a/.claude/skills/skill-creator/agents/analyzer.md b/.claude/skills/skill-creator/agents/analyzer.md
new file mode 100644
index 000000000..14e41d606
--- /dev/null
+++ b/.claude/skills/skill-creator/agents/analyzer.md
@@ -0,0 +1,274 @@
+# Post-hoc Analyzer Agent
+
+Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.
+
+## Role
+
+After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?
+
+## Inputs
+
+You receive these parameters in your prompt:
+
+- **winner**: "A" or "B" (from blind comparison)
+- **winner_skill_path**: Path to the skill that produced the winning output
+- **winner_transcript_path**: Path to the execution transcript for the winner
+- **loser_skill_path**: Path to the skill that produced the losing output
+- **loser_transcript_path**: Path to the execution transcript for the loser
+- **comparison_result_path**: Path to the blind comparator's output JSON
+- **output_path**: Where to save the analysis results
+
+## Process
+
+### Step 1: Read Comparison Result
+
+1. Read the blind comparator's output at comparison_result_path
+2. Note the winning side (A or B), the reasoning, and any scores
+3. Understand what the comparator valued in the winning output
+
+### Step 2: Read Both Skills
+
+1. Read the winner skill's SKILL.md and key referenced files
+2. Read the loser skill's SKILL.md and key referenced files
+3. Identify structural differences:
+ - Instructions clarity and specificity
+ - Script/tool usage patterns
+ - Example coverage
+ - Edge case handling
+
+### Step 3: Read Both Transcripts
+
+1. Read the winner's transcript
+2. Read the loser's transcript
+3. Compare execution patterns:
+ - How closely did each follow their skill's instructions?
+ - What tools were used differently?
+ - Where did the loser diverge from optimal behavior?
+ - Did either encounter errors or make recovery attempts?
+
+### Step 4: Analyze Instruction Following
+
+For each transcript, evaluate:
+- Did the agent follow the skill's explicit instructions?
+- Did the agent use the skill's provided tools/scripts?
+- Were there missed opportunities to leverage skill content?
+- Did the agent add unnecessary steps not in the skill?
+
+Score instruction following 1-10 and note specific issues.
+
+### Step 5: Identify Winner Strengths
+
+Determine what made the winner better:
+- Clearer instructions that led to better behavior?
+- Better scripts/tools that produced better output?
+- More comprehensive examples that guided edge cases?
+- Better error handling guidance?
+
+Be specific. Quote from skills/transcripts where relevant.
+
+### Step 6: Identify Loser Weaknesses
+
+Determine what held the loser back:
+- Ambiguous instructions that led to suboptimal choices?
+- Missing tools/scripts that forced workarounds?
+- Gaps in edge case coverage?
+- Poor error handling that caused failures?
+
+### Step 7: Generate Improvement Suggestions
+
+Based on the analysis, produce actionable suggestions for improving the loser skill:
+- Specific instruction changes to make
+- Tools/scripts to add or modify
+- Examples to include
+- Edge cases to address
+
+Prioritize by impact. Focus on changes that would have changed the outcome.
+
+### Step 8: Write Analysis Results
+
+Save structured analysis to `{output_path}`.
+
+## Output Format
+
+Write a JSON file with this structure:
+
+```json
+{
+ "comparison_summary": {
+ "winner": "A",
+ "winner_skill": "path/to/winner/skill",
+ "loser_skill": "path/to/loser/skill",
+ "comparator_reasoning": "Brief summary of why comparator chose winner"
+ },
+ "winner_strengths": [
+ "Clear step-by-step instructions for handling multi-page documents",
+ "Included validation script that caught formatting errors",
+ "Explicit guidance on fallback behavior when OCR fails"
+ ],
+ "loser_weaknesses": [
+ "Vague instruction 'process the document appropriately' led to inconsistent behavior",
+ "No script for validation, agent had to improvise and made errors",
+ "No guidance on OCR failure, agent gave up instead of trying alternatives"
+ ],
+ "instruction_following": {
+ "winner": {
+ "score": 9,
+ "issues": [
+ "Minor: skipped optional logging step"
+ ]
+ },
+ "loser": {
+ "score": 6,
+ "issues": [
+ "Did not use the skill's formatting template",
+ "Invented own approach instead of following step 3",
+ "Missed the 'always validate output' instruction"
+ ]
+ }
+ },
+ "improvement_suggestions": [
+ {
+ "priority": "high",
+ "category": "instructions",
+ "suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
+ "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
+ },
+ {
+ "priority": "high",
+ "category": "tools",
+ "suggestion": "Add validate_output.py script similar to winner skill's validation approach",
+ "expected_impact": "Would catch formatting errors before final output"
+ },
+ {
+ "priority": "medium",
+ "category": "error_handling",
+ "suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'",
+ "expected_impact": "Would prevent early failure on difficult documents"
+ }
+ ],
+ "transcript_insights": {
+ "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
+ "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
+ }
+}
+```
+
+## Guidelines
+
+- **Be specific**: Quote from skills and transcripts, don't just say "instructions were unclear"
+- **Be actionable**: Suggestions should be concrete changes, not vague advice
+- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
+- **Prioritize by impact**: Which changes would most likely have changed the outcome?
+- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
+- **Stay objective**: Analyze what happened, don't editorialize
+- **Think about generalization**: Would this improvement help on other evals too?
+
+## Categories for Suggestions
+
+Use these categories to organize improvement suggestions:
+
+| Category | Description |
+|----------|-------------|
+| `instructions` | Changes to the skill's prose instructions |
+| `tools` | Scripts, templates, or utilities to add/modify |
+| `examples` | Example inputs/outputs to include |
+| `error_handling` | Guidance for handling failures |
+| `structure` | Reorganization of skill content |
+| `references` | External docs or resources to add |
+
+## Priority Levels
+
+- **high**: Would likely change the outcome of this comparison
+- **medium**: Would improve quality but may not change win/loss
+- **low**: Nice to have, marginal improvement
+
+---
+
+# Analyzing Benchmark Results
+
+When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not suggest skill improvements.
+
+## Role
+
+Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.
+
+## Inputs
+
+You receive these parameters in your prompt:
+
+- **benchmark_data_path**: Path to the in-progress benchmark.json with all run results
+- **skill_path**: Path to the skill being benchmarked
+- **output_path**: Where to save the notes (as JSON array of strings)
+
+## Process
+
+### Step 1: Read Benchmark Data
+
+1. Read the benchmark.json containing all run results
+2. Note the configurations tested (with_skill, without_skill)
+3. Understand the run_summary aggregates already calculated
+
+### Step 2: Analyze Per-Assertion Patterns
+
+For each expectation across all runs:
+- Does it **always pass** in both configurations? (may not differentiate skill value)
+- Does it **always fail** in both configurations? (may be broken or beyond capability)
+- Does it **always pass with skill but fail without**? (skill clearly adds value here)
+- Does it **always fail with skill but pass without**? (skill may be hurting)
+- Is it **highly variable**? (flaky expectation or non-deterministic behavior)
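+
+One way to surface these patterns is a simple tally of pass rates per (expectation, configuration) pair. This is only a sketch; it assumes the benchmark.json layout from `references/schemas.md`, where each run carries `configuration` and an `expectations` list:
+
+```python
+import json
+from collections import defaultdict
+from pathlib import Path
+
+runs = json.loads(Path("benchmark.json").read_text())["runs"]
+
+# Group pass/fail outcomes by (expectation text, configuration).
+tally: dict[tuple[str, str], list[bool]] = defaultdict(list)
+for run in runs:
+    for exp in run.get("expectations", []):
+        tally[(exp["text"], run["configuration"])].append(exp["passed"])
+
+for (text, config), results in sorted(tally.items()):
+    rate = sum(results) / len(results)
+    print(f"{config:>13}  {rate:5.0%}  {text}")
+```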
+
+### Step 3: Analyze Cross-Eval Patterns
+
+Look for patterns across evals:
+- Are certain eval types consistently harder/easier?
+- Do some evals show high variance while others are stable?
+- Are there surprising results that contradict expectations?
+
+### Step 4: Analyze Metrics Patterns
+
+Look at time_seconds, tokens, tool_calls:
+- Does the skill significantly increase execution time?
+- Is there high variance in resource usage?
+- Are there outlier runs that skew the aggregates?
+
+### Step 5: Generate Notes
+
+Write freeform observations as a list of strings. Each note should:
+- State a specific observation
+- Be grounded in the data (not speculation)
+- Help the user understand something the aggregate metrics don't show
+
+Examples:
+- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
+- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
+- "Without-skill runs consistently fail on table extraction expectations (0% pass rate)"
+- "Skill adds 13s average execution time but improves pass rate by 50%"
+- "Token usage is 80% higher with skill, primarily due to script output parsing"
+- "All 3 without-skill runs for eval 1 produced empty output"
+
+### Step 6: Write Notes
+
+Save notes to `{output_path}` as a JSON array of strings:
+
+```json
+[
+ "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
+ "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure",
+ "Without-skill runs consistently fail on table extraction expectations",
+ "Skill adds 13s average execution time but improves pass rate by 50%"
+]
+```
+
+## Guidelines
+
+**DO:**
+- Report what you observe in the data
+- Be specific about which evals, expectations, or runs you're referring to
+- Note patterns that aggregate metrics would hide
+- Provide context that helps interpret the numbers
+
+**DO NOT:**
+- Suggest improvements to the skill (that's for the improvement step, not benchmarking)
+- Make subjective quality judgments ("the output was good/bad")
+- Speculate about causes without evidence
+- Repeat information already in the run_summary aggregates
diff --git a/.claude/skills/skill-creator/agents/comparator.md b/.claude/skills/skill-creator/agents/comparator.md
new file mode 100644
index 000000000..80e00eb45
--- /dev/null
+++ b/.claude/skills/skill-creator/agents/comparator.md
@@ -0,0 +1,202 @@
+# Blind Comparator Agent
+
+Compare two outputs WITHOUT knowing which skill produced them.
+
+## Role
+
+The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach.
+
+Your judgment is based purely on output quality and task completion.
+
+## Inputs
+
+You receive these parameters in your prompt:
+
+- **output_a_path**: Path to the first output file or directory
+- **output_b_path**: Path to the second output file or directory
+- **eval_prompt**: The original task/prompt that was executed
+- **expectations**: List of expectations to check (optional - may be empty)
+
+## Process
+
+### Step 1: Read Both Outputs
+
+1. Examine output A (file or directory)
+2. Examine output B (file or directory)
+3. Note the type, structure, and content of each
+4. If outputs are directories, examine all relevant files inside
+
+### Step 2: Understand the Task
+
+1. Read the eval_prompt carefully
+2. Identify what the task requires:
+ - What should be produced?
+ - What qualities matter (accuracy, completeness, format)?
+ - What would distinguish a good output from a poor one?
+
+### Step 3: Generate Evaluation Rubric
+
+Based on the task, generate a rubric with two dimensions:
+
+**Content Rubric** (what the output contains):
+| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
+|-----------|----------|----------------|---------------|
+| Correctness | Major errors | Minor errors | Fully correct |
+| Completeness | Missing key elements | Mostly complete | All elements present |
+| Accuracy | Significant inaccuracies | Minor inaccuracies | Accurate throughout |
+
+**Structure Rubric** (how the output is organized):
+| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) |
+|-----------|----------|----------------|---------------|
+| Organization | Disorganized | Reasonably organized | Clear, logical structure |
+| Formatting | Inconsistent/broken | Mostly consistent | Professional, polished |
+| Usability | Difficult to use | Usable with effort | Easy to use |
+
+Adapt criteria to the specific task. For example:
+- PDF form → "Field alignment", "Text readability", "Data placement"
+- Document → "Section structure", "Heading hierarchy", "Paragraph flow"
+- Data output → "Schema correctness", "Data types", "Completeness"
+
+### Step 4: Evaluate Each Output Against the Rubric
+
+For each output (A and B):
+
+1. **Score each criterion** on the rubric (1-5 scale)
+2. **Calculate dimension totals**: Content score, Structure score
+3. **Calculate overall score**: Average of dimension scores, scaled to 1-10
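+
+"Scaled to 1-10" is not pinned down above; doubling the 1-5 dimension average reproduces the scores in the example output below, so this sketch assumes that reading:
+
+```python
+def score_output(content: dict[str, int], structure: dict[str, int]) -> dict[str, float]:
+    content_score = round(sum(content.values()) / len(content), 1)
+    structure_score = round(sum(structure.values()) / len(structure), 1)
+    # Mean of the two dimension scores, times 2, lands on the 1-10 scale.
+    overall = round(content_score + structure_score, 1)
+    return {"content_score": content_score, "structure_score": structure_score, "overall_score": overall}
+
+# score_output({"correctness": 5, "completeness": 5, "accuracy": 4},
+#              {"organization": 4, "formatting": 5, "usability": 4})
+# -> {"content_score": 4.7, "structure_score": 4.3, "overall_score": 9.0}
+```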
+
+### Step 5: Check Assertions (if provided)
+
+If expectations are provided:
+
+1. Check each expectation against output A
+2. Check each expectation against output B
+3. Count pass rates for each output
+4. Use expectation scores as secondary evidence (not the primary decision factor)
+
+### Step 6: Determine the Winner
+
+Compare A and B based on (in priority order):
+
+1. **Primary**: Overall rubric score (content + structure)
+2. **Secondary**: Assertion pass rates (if applicable)
+3. **Tiebreaker**: If truly equal, declare a TIE
+
+Be decisive - ties should be rare. One output is usually better, even if marginally.
+
+### Step 7: Write Comparison Results
+
+Save results to a JSON file at the path specified (or `comparison.json` if not specified).
+
+## Output Format
+
+Write a JSON file with this structure:
+
+```json
+{
+ "winner": "A",
+ "reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
+ "rubric": {
+ "A": {
+ "content": {
+ "correctness": 5,
+ "completeness": 5,
+ "accuracy": 4
+ },
+ "structure": {
+ "organization": 4,
+ "formatting": 5,
+ "usability": 4
+ },
+ "content_score": 4.7,
+ "structure_score": 4.3,
+ "overall_score": 9.0
+ },
+ "B": {
+ "content": {
+ "correctness": 3,
+ "completeness": 2,
+ "accuracy": 3
+ },
+ "structure": {
+ "organization": 3,
+ "formatting": 2,
+ "usability": 3
+ },
+ "content_score": 2.7,
+ "structure_score": 2.7,
+ "overall_score": 5.4
+ }
+ },
+ "output_quality": {
+ "A": {
+ "score": 9,
+ "strengths": ["Complete solution", "Well-formatted", "All fields present"],
+ "weaknesses": ["Minor style inconsistency in header"]
+ },
+ "B": {
+ "score": 5,
+ "strengths": ["Readable output", "Correct basic structure"],
+ "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
+ }
+ },
+ "expectation_results": {
+ "A": {
+ "passed": 4,
+ "total": 5,
+ "pass_rate": 0.80,
+ "details": [
+ {"text": "Output includes name", "passed": true},
+ {"text": "Output includes date", "passed": true},
+ {"text": "Format is PDF", "passed": true},
+ {"text": "Contains signature", "passed": false},
+ {"text": "Readable text", "passed": true}
+ ]
+ },
+ "B": {
+ "passed": 3,
+ "total": 5,
+ "pass_rate": 0.60,
+ "details": [
+ {"text": "Output includes name", "passed": true},
+ {"text": "Output includes date", "passed": false},
+ {"text": "Format is PDF", "passed": true},
+ {"text": "Contains signature", "passed": false},
+ {"text": "Readable text", "passed": true}
+ ]
+ }
+ }
+}
+```
+
+If no expectations were provided, omit the `expectation_results` field entirely.
+
+## Field Descriptions
+
+- **winner**: "A", "B", or "TIE"
+- **reasoning**: Clear explanation of why the winner was chosen (or why it's a tie)
+- **rubric**: Structured rubric evaluation for each output
+ - **content**: Scores for content criteria (correctness, completeness, accuracy)
+ - **structure**: Scores for structure criteria (organization, formatting, usability)
+ - **content_score**: Average of content criteria (1-5)
+ - **structure_score**: Average of structure criteria (1-5)
+ - **overall_score**: Combined score scaled to 1-10
+- **output_quality**: Summary quality assessment
+ - **score**: 1-10 rating (should match rubric overall_score)
+ - **strengths**: List of positive aspects
+ - **weaknesses**: List of issues or shortcomings
+- **expectation_results**: (Only if expectations provided)
+ - **passed**: Number of expectations that passed
+ - **total**: Total number of expectations
+ - **pass_rate**: Fraction passed (0.0 to 1.0)
+ - **details**: Individual expectation results
+
+## Guidelines
+
+- **Stay blind**: DO NOT try to infer which skill produced which output. Judge purely on output quality.
+- **Be specific**: Cite specific examples when explaining strengths and weaknesses.
+- **Be decisive**: Choose a winner unless outputs are genuinely equivalent.
+- **Output quality first**: Assertion scores are secondary to overall task completion.
+- **Be objective**: Don't favor outputs based on style preferences; focus on correctness and completeness.
+- **Explain your reasoning**: The reasoning field should make it clear why you chose the winner.
+- **Handle edge cases**: If both outputs fail, pick the one that fails less badly. If both are excellent, pick the one that's marginally better.
diff --git a/.claude/skills/skill-creator/agents/grader.md b/.claude/skills/skill-creator/agents/grader.md
new file mode 100644
index 000000000..558ab05c0
--- /dev/null
+++ b/.claude/skills/skill-creator/agents/grader.md
@@ -0,0 +1,223 @@
+# Grader Agent
+
+Evaluate expectations against an execution transcript and outputs.
+
+## Role
+
+The Grader reviews a transcript and output files, then determines whether each expectation passes or fails. Provide clear evidence for each judgment.
+
+You have two jobs: grade the outputs, and critique the evals themselves. A passing grade on a weak assertion is worse than useless — it creates false confidence. When you notice an assertion that's trivially satisfied, or an important outcome that no assertion checks, say so.
+
+## Inputs
+
+You receive these parameters in your prompt:
+
+- **expectations**: List of expectations to evaluate (strings)
+- **transcript_path**: Path to the execution transcript (markdown file)
+- **outputs_dir**: Directory containing output files from execution
+
+## Process
+
+### Step 1: Read the Transcript
+
+1. Read the transcript file completely
+2. Note the eval prompt, execution steps, and final result
+3. Identify any issues or errors documented
+
+### Step 2: Examine Output Files
+
+1. List files in outputs_dir
+2. Read/examine each file relevant to the expectations. If outputs aren't plain text, use the inspection tools provided in your prompt — don't rely solely on what the transcript says the executor produced.
+3. Note contents, structure, and quality
+
+### Step 3: Evaluate Each Assertion
+
+For each expectation:
+
+1. **Search for evidence** in the transcript and outputs
+2. **Determine verdict**:
+ - **PASS**: Clear evidence the expectation is true AND the evidence reflects genuine task completion, not just surface-level compliance
+ - **FAIL**: No evidence, or evidence contradicts the expectation, or the evidence is superficial (e.g., correct filename but empty/wrong content)
+3. **Cite the evidence**: Quote the specific text or describe what you found
+
+### Step 4: Extract and Verify Claims
+
+Beyond the predefined expectations, extract implicit claims from the outputs and verify them:
+
+1. **Extract claims** from the transcript and outputs:
+ - Factual statements ("The form has 12 fields")
+ - Process claims ("Used pypdf to fill the form")
+ - Quality claims ("All fields were filled correctly")
+
+2. **Verify each claim**:
+ - **Factual claims**: Can be checked against the outputs or external sources
+ - **Process claims**: Can be verified from the transcript
+ - **Quality claims**: Evaluate whether the claim is justified
+
+3. **Flag unverifiable claims**: Note claims that cannot be verified with available information
+
+This catches issues that predefined expectations might miss.
+
+### Step 5: Read User Notes
+
+If `{outputs_dir}/user_notes.md` exists:
+1. Read it and note any uncertainties or issues flagged by the executor
+2. Include relevant concerns in the grading output
+3. These may reveal problems even when expectations pass
+
+### Step 6: Critique the Evals
+
+After grading, consider whether the evals themselves could be improved. Only surface suggestions when there's a clear gap.
+
+Good suggestions test meaningful outcomes — assertions that are hard to satisfy without actually doing the work correctly. Think about what makes an assertion *discriminating*: it passes when the skill genuinely succeeds and fails when it doesn't.
+
+Suggestions worth raising:
+- An assertion that passed but would also pass for a clearly wrong output (e.g., checking filename existence but not file content)
+- An important outcome you observed — good or bad — that no assertion covers at all
+- An assertion that can't actually be verified from the available outputs
+
+Keep the bar high. The goal is to flag things the eval author would say "good catch" about, not to nitpick every assertion.
+
+### Step 7: Write Grading Results
+
+Save results to `{outputs_dir}/../grading.json` (sibling to outputs_dir).
+
+## Grading Criteria
+
+**PASS when**:
+- The transcript or outputs clearly demonstrate the expectation is true
+- Specific evidence can be cited
+- The evidence reflects genuine substance, not just surface compliance (e.g., a file exists AND contains correct content, not just the right filename)
+
+**FAIL when**:
+- No evidence found for the expectation
+- Evidence contradicts the expectation
+- The expectation cannot be verified from available information
+- The evidence is superficial — the assertion is technically satisfied but the underlying task outcome is wrong or incomplete
+- The output appears to meet the assertion by coincidence rather than by actually doing the work
+
+**When uncertain**: Fail. The burden of proof is on showing the expectation holds, not on disproving it.
+
+### Step 8: Read Executor Metrics and Timing
+
+1. If `{outputs_dir}/metrics.json` exists, read it and include in grading output
+2. If `{outputs_dir}/../timing.json` exists, read it and include timing data
+
+## Output Format
+
+Write a JSON file with this structure:
+
+```json
+{
+ "expectations": [
+ {
+ "text": "The output includes the name 'John Smith'",
+ "passed": true,
+ "evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
+ },
+ {
+ "text": "The spreadsheet has a SUM formula in cell B10",
+ "passed": false,
+ "evidence": "No spreadsheet was created. The output was a text file."
+ },
+ {
+ "text": "The assistant used the skill's OCR script",
+ "passed": true,
+ "evidence": "Transcript Step 2 shows: 'Tool: Bash - python ocr_script.py image.png'"
+ }
+ ],
+ "summary": {
+ "passed": 2,
+ "failed": 1,
+ "total": 3,
+ "pass_rate": 0.67
+ },
+ "execution_metrics": {
+ "tool_calls": {
+ "Read": 5,
+ "Write": 2,
+ "Bash": 8
+ },
+ "total_tool_calls": 15,
+ "total_steps": 6,
+ "errors_encountered": 0,
+ "output_chars": 12450,
+ "transcript_chars": 3200
+ },
+ "timing": {
+ "executor_duration_seconds": 165.0,
+ "grader_duration_seconds": 26.0,
+ "total_duration_seconds": 191.0
+ },
+ "claims": [
+ {
+ "claim": "The form has 12 fillable fields",
+ "type": "factual",
+ "verified": true,
+ "evidence": "Counted 12 fields in field_info.json"
+ },
+ {
+ "claim": "All required fields were populated",
+ "type": "quality",
+ "verified": false,
+ "evidence": "Reference section was left blank despite data being available"
+ }
+ ],
+ "user_notes_summary": {
+ "uncertainties": ["Used 2023 data, may be stale"],
+ "needs_review": [],
+ "workarounds": ["Fell back to text overlay for non-fillable fields"]
+ },
+ "eval_feedback": {
+ "suggestions": [
+ {
+ "assertion": "The output includes the name 'John Smith'",
+ "reason": "A hallucinated document that mentions the name would also pass — consider checking it appears as the primary contact with matching phone and email from the input"
+ },
+ {
+ "reason": "No assertion checks whether the extracted phone numbers match the input — I observed incorrect numbers in the output that went uncaught"
+ }
+ ],
+ "overall": "Assertions check presence but not correctness. Consider adding content verification."
+ }
+}
+```
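+
+If you assemble the `summary` block by hand, it is just counting over the graded expectations. A quick sketch using the field names in the JSON above:
+
+```python
+def summarize(expectations: list[dict]) -> dict:
+    passed = sum(1 for e in expectations if e["passed"])
+    total = len(expectations)
+    return {
+        "passed": passed,
+        "failed": total - passed,
+        "total": total,
+        "pass_rate": round(passed / total, 2) if total else 0.0,
+    }
+```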
+
+## Field Descriptions
+
+- **expectations**: Array of graded expectations
+ - **text**: The original expectation text
+ - **passed**: Boolean - true if expectation passes
+ - **evidence**: Specific quote or description supporting the verdict
+- **summary**: Aggregate statistics
+ - **passed**: Count of passed expectations
+ - **failed**: Count of failed expectations
+ - **total**: Total expectations evaluated
+ - **pass_rate**: Fraction passed (0.0 to 1.0)
+- **execution_metrics**: Copied from executor's metrics.json (if available)
+ - **output_chars**: Total character count of output files (proxy for tokens)
+ - **transcript_chars**: Character count of transcript
+- **timing**: Wall clock timing from timing.json (if available)
+ - **executor_duration_seconds**: Time spent in executor subagent
+ - **total_duration_seconds**: Total elapsed time for the run
+- **claims**: Extracted and verified claims from the output
+ - **claim**: The statement being verified
+ - **type**: "factual", "process", or "quality"
+ - **verified**: Boolean - whether the claim holds
+ - **evidence**: Supporting or contradicting evidence
+- **user_notes_summary**: Issues flagged by the executor
+ - **uncertainties**: Things the executor wasn't sure about
+ - **needs_review**: Items requiring human attention
+ - **workarounds**: Places where the skill didn't work as expected
+- **eval_feedback**: Improvement suggestions for the evals (only when warranted)
+ - **suggestions**: List of concrete suggestions, each with a `reason` and optionally an `assertion` it relates to
+ - **overall**: Brief assessment — can be "No suggestions, evals look solid" if nothing to flag
+
+## Guidelines
+
+- **Be objective**: Base verdicts on evidence, not assumptions
+- **Be specific**: Quote the exact text that supports your verdict
+- **Be thorough**: Check both transcript and output files
+- **Be consistent**: Apply the same standard to each expectation
+- **Explain failures**: Make it clear why evidence was insufficient
+- **No partial credit**: Each expectation is pass or fail, not partial
diff --git a/.claude/skills/skill-creator/assets/eval_review.html b/.claude/skills/skill-creator/assets/eval_review.html
new file mode 100644
index 000000000..938ff32ae
--- /dev/null
+++ b/.claude/skills/skill-creator/assets/eval_review.html
@@ -0,0 +1,146 @@
+<!-- Eval Set Review template (markup elided) -->
+<!-- Title: "Eval Set Review - __SKILL_NAME_PLACEHOLDER__" -->
+<!-- Header shows: "Eval Set Review: __SKILL_NAME_PLACEHOLDER__" and "Current description: __SKILL_DESCRIPTION_PLACEHOLDER__" -->
+<!-- Table of eval queries with columns: Query | Should Trigger | Actions -->
diff --git a/.claude/skills/skill-creator/eval-viewer/generate_review.py b/.claude/skills/skill-creator/eval-viewer/generate_review.py
new file mode 100644
index 000000000..7fa597863
--- /dev/null
+++ b/.claude/skills/skill-creator/eval-viewer/generate_review.py
@@ -0,0 +1,471 @@
+#!/usr/bin/env python3
+"""Generate and serve a review page for eval results.
+
+Reads the workspace directory, discovers runs (directories with outputs/),
+embeds all output data into a self-contained HTML page, and serves it via
+a tiny HTTP server. Feedback auto-saves to feedback.json in the workspace.
+
+Usage:
+    python generate_review.py WORKSPACE [--port PORT] [--skill-name NAME]
+    python generate_review.py WORKSPACE --previous-workspace /path/to/previous/workspace
+
+No dependencies beyond the Python stdlib are required.
+"""
+
+import argparse
+import base64
+import json
+import mimetypes
+import os
+import re
+import signal
+import subprocess
+import sys
+import time
+import webbrowser
+from functools import partial
+from http.server import HTTPServer, BaseHTTPRequestHandler
+from pathlib import Path
+
+# Files to exclude from output listings
+METADATA_FILES = {"transcript.md", "user_notes.md", "metrics.json"}
+
+# Extensions we render as inline text
+TEXT_EXTENSIONS = {
+ ".txt", ".md", ".json", ".csv", ".py", ".js", ".ts", ".tsx", ".jsx",
+ ".yaml", ".yml", ".xml", ".html", ".css", ".sh", ".rb", ".go", ".rs",
+ ".java", ".c", ".cpp", ".h", ".hpp", ".sql", ".r", ".toml",
+}
+
+# Extensions we render as inline images
+IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp"}
+
+# MIME type overrides for common types
+MIME_OVERRIDES = {
+ ".svg": "image/svg+xml",
+ ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
+ ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
+ ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
+}
+
+
+def get_mime_type(path: Path) -> str:
+ ext = path.suffix.lower()
+ if ext in MIME_OVERRIDES:
+ return MIME_OVERRIDES[ext]
+ mime, _ = mimetypes.guess_type(str(path))
+ return mime or "application/octet-stream"
+
+
+def find_runs(workspace: Path) -> list[dict]:
+ """Recursively find directories that contain an outputs/ subdirectory."""
+ runs: list[dict] = []
+ _find_runs_recursive(workspace, workspace, runs)
+    # Sort by eval_id when present, pushing runs without one to the end.
+    runs.sort(key=lambda r: (r["eval_id"] if r.get("eval_id") is not None else float("inf"), r["id"]))
+ return runs
+
+
+def _find_runs_recursive(root: Path, current: Path, runs: list[dict]) -> None:
+ if not current.is_dir():
+ return
+
+ outputs_dir = current / "outputs"
+ if outputs_dir.is_dir():
+ run = build_run(root, current)
+ if run:
+ runs.append(run)
+ return
+
+ skip = {"node_modules", ".git", "__pycache__", "skill", "inputs"}
+ for child in sorted(current.iterdir()):
+ if child.is_dir() and child.name not in skip:
+ _find_runs_recursive(root, child, runs)
+
+
+def build_run(root: Path, run_dir: Path) -> dict | None:
+ """Build a run dict with prompt, outputs, and grading data."""
+ prompt = ""
+ eval_id = None
+
+ # Try eval_metadata.json
+ for candidate in [run_dir / "eval_metadata.json", run_dir.parent / "eval_metadata.json"]:
+ if candidate.exists():
+ try:
+ metadata = json.loads(candidate.read_text())
+ prompt = metadata.get("prompt", "")
+ eval_id = metadata.get("eval_id")
+ except (json.JSONDecodeError, OSError):
+ pass
+ if prompt:
+ break
+
+ # Fall back to transcript.md
+ if not prompt:
+ for candidate in [run_dir / "transcript.md", run_dir / "outputs" / "transcript.md"]:
+ if candidate.exists():
+ try:
+ text = candidate.read_text()
+ match = re.search(r"## Eval Prompt\n\n([\s\S]*?)(?=\n##|$)", text)
+ if match:
+ prompt = match.group(1).strip()
+ except OSError:
+ pass
+ if prompt:
+ break
+
+ if not prompt:
+ prompt = "(No prompt found)"
+
+ run_id = str(run_dir.relative_to(root)).replace("/", "-").replace("\\", "-")
+
+ # Collect output files
+ outputs_dir = run_dir / "outputs"
+ output_files: list[dict] = []
+ if outputs_dir.is_dir():
+ for f in sorted(outputs_dir.iterdir()):
+ if f.is_file() and f.name not in METADATA_FILES:
+ output_files.append(embed_file(f))
+
+ # Load grading if present
+ grading = None
+ for candidate in [run_dir / "grading.json", run_dir.parent / "grading.json"]:
+ if candidate.exists():
+ try:
+ grading = json.loads(candidate.read_text())
+ except (json.JSONDecodeError, OSError):
+ pass
+ if grading:
+ break
+
+ return {
+ "id": run_id,
+ "prompt": prompt,
+ "eval_id": eval_id,
+ "outputs": output_files,
+ "grading": grading,
+ }
+
+
+def embed_file(path: Path) -> dict:
+ """Read a file and return an embedded representation."""
+ ext = path.suffix.lower()
+ mime = get_mime_type(path)
+
+ if ext in TEXT_EXTENSIONS:
+ try:
+ content = path.read_text(errors="replace")
+ except OSError:
+ content = "(Error reading file)"
+ return {
+ "name": path.name,
+ "type": "text",
+ "content": content,
+ }
+ elif ext in IMAGE_EXTENSIONS:
+ try:
+ raw = path.read_bytes()
+ b64 = base64.b64encode(raw).decode("ascii")
+ except OSError:
+ return {"name": path.name, "type": "error", "content": "(Error reading file)"}
+ return {
+ "name": path.name,
+ "type": "image",
+ "mime": mime,
+ "data_uri": f"data:{mime};base64,{b64}",
+ }
+ elif ext == ".pdf":
+ try:
+ raw = path.read_bytes()
+ b64 = base64.b64encode(raw).decode("ascii")
+ except OSError:
+ return {"name": path.name, "type": "error", "content": "(Error reading file)"}
+ return {
+ "name": path.name,
+ "type": "pdf",
+ "data_uri": f"data:{mime};base64,{b64}",
+ }
+ elif ext == ".xlsx":
+ try:
+ raw = path.read_bytes()
+ b64 = base64.b64encode(raw).decode("ascii")
+ except OSError:
+ return {"name": path.name, "type": "error", "content": "(Error reading file)"}
+ return {
+ "name": path.name,
+ "type": "xlsx",
+ "data_b64": b64,
+ }
+ else:
+ # Binary / unknown — base64 download link
+ try:
+ raw = path.read_bytes()
+ b64 = base64.b64encode(raw).decode("ascii")
+ except OSError:
+ return {"name": path.name, "type": "error", "content": "(Error reading file)"}
+ return {
+ "name": path.name,
+ "type": "binary",
+ "mime": mime,
+ "data_uri": f"data:{mime};base64,{b64}",
+ }
+
+
+def load_previous_iteration(workspace: Path) -> dict[str, dict]:
+ """Load previous iteration's feedback and outputs.
+
+ Returns a map of run_id -> {"feedback": str, "outputs": list[dict]}.
+ """
+ result: dict[str, dict] = {}
+
+ # Load feedback
+ feedback_map: dict[str, str] = {}
+ feedback_path = workspace / "feedback.json"
+ if feedback_path.exists():
+ try:
+ data = json.loads(feedback_path.read_text())
+ feedback_map = {
+ r["run_id"]: r["feedback"]
+ for r in data.get("reviews", [])
+ if r.get("feedback", "").strip()
+ }
+ except (json.JSONDecodeError, OSError, KeyError):
+ pass
+
+ # Load runs (to get outputs)
+ prev_runs = find_runs(workspace)
+ for run in prev_runs:
+ result[run["id"]] = {
+ "feedback": feedback_map.get(run["id"], ""),
+ "outputs": run.get("outputs", []),
+ }
+
+ # Also add feedback for run_ids that had feedback but no matching run
+ for run_id, fb in feedback_map.items():
+ if run_id not in result:
+ result[run_id] = {"feedback": fb, "outputs": []}
+
+ return result
+
+
+def generate_html(
+ runs: list[dict],
+ skill_name: str,
+ previous: dict[str, dict] | None = None,
+ benchmark: dict | None = None,
+) -> str:
+ """Generate the complete standalone HTML page with embedded data."""
+ template_path = Path(__file__).parent / "viewer.html"
+ template = template_path.read_text()
+
+ # Build previous_feedback and previous_outputs maps for the template
+ previous_feedback: dict[str, str] = {}
+ previous_outputs: dict[str, list[dict]] = {}
+ if previous:
+ for run_id, data in previous.items():
+ if data.get("feedback"):
+ previous_feedback[run_id] = data["feedback"]
+ if data.get("outputs"):
+ previous_outputs[run_id] = data["outputs"]
+
+ embedded = {
+ "skill_name": skill_name,
+ "runs": runs,
+ "previous_feedback": previous_feedback,
+ "previous_outputs": previous_outputs,
+ }
+ if benchmark:
+ embedded["benchmark"] = benchmark
+
+ data_json = json.dumps(embedded)
+
+ return template.replace("/*__EMBEDDED_DATA__*/", f"const EMBEDDED_DATA = {data_json};")
+
+
+# ---------------------------------------------------------------------------
+# HTTP server (stdlib only, zero dependencies)
+# ---------------------------------------------------------------------------
+
+def _kill_port(port: int) -> None:
+ """Kill any process listening on the given port."""
+ try:
+ result = subprocess.run(
+ ["lsof", "-ti", f":{port}"],
+ capture_output=True, text=True, timeout=5,
+ )
+ for pid_str in result.stdout.strip().split("\n"):
+ if pid_str.strip():
+ try:
+ os.kill(int(pid_str.strip()), signal.SIGTERM)
+ except (ProcessLookupError, ValueError):
+ pass
+ if result.stdout.strip():
+ time.sleep(0.5)
+ except subprocess.TimeoutExpired:
+ pass
+ except FileNotFoundError:
+ print("Note: lsof not found, cannot check if port is in use", file=sys.stderr)
+
+class ReviewHandler(BaseHTTPRequestHandler):
+ """Serves the review HTML and handles feedback saves.
+
+ Regenerates the HTML on each page load so that refreshing the browser
+ picks up new eval outputs without restarting the server.
+ """
+
+ def __init__(
+ self,
+ workspace: Path,
+ skill_name: str,
+ feedback_path: Path,
+ previous: dict[str, dict],
+ benchmark_path: Path | None,
+ *args,
+ **kwargs,
+ ):
+ self.workspace = workspace
+ self.skill_name = skill_name
+ self.feedback_path = feedback_path
+ self.previous = previous
+ self.benchmark_path = benchmark_path
+ super().__init__(*args, **kwargs)
+
+ def do_GET(self) -> None:
+ if self.path == "/" or self.path == "/index.html":
+ # Regenerate HTML on each request (re-scans workspace for new outputs)
+ runs = find_runs(self.workspace)
+ benchmark = None
+ if self.benchmark_path and self.benchmark_path.exists():
+ try:
+ benchmark = json.loads(self.benchmark_path.read_text())
+ except (json.JSONDecodeError, OSError):
+ pass
+ html = generate_html(runs, self.skill_name, self.previous, benchmark)
+ content = html.encode("utf-8")
+ self.send_response(200)
+ self.send_header("Content-Type", "text/html; charset=utf-8")
+ self.send_header("Content-Length", str(len(content)))
+ self.end_headers()
+ self.wfile.write(content)
+ elif self.path == "/api/feedback":
+ data = b"{}"
+ if self.feedback_path.exists():
+ data = self.feedback_path.read_bytes()
+ self.send_response(200)
+ self.send_header("Content-Type", "application/json")
+ self.send_header("Content-Length", str(len(data)))
+ self.end_headers()
+ self.wfile.write(data)
+ else:
+ self.send_error(404)
+
+ def do_POST(self) -> None:
+ if self.path == "/api/feedback":
+ length = int(self.headers.get("Content-Length", 0))
+ body = self.rfile.read(length)
+ try:
+ data = json.loads(body)
+ if not isinstance(data, dict) or "reviews" not in data:
+ raise ValueError("Expected JSON object with 'reviews' key")
+ self.feedback_path.write_text(json.dumps(data, indent=2) + "\n")
+ resp = b'{"ok":true}'
+ self.send_response(200)
+ except (json.JSONDecodeError, OSError, ValueError) as e:
+ resp = json.dumps({"error": str(e)}).encode()
+ self.send_response(500)
+ self.send_header("Content-Type", "application/json")
+ self.send_header("Content-Length", str(len(resp)))
+ self.end_headers()
+ self.wfile.write(resp)
+ else:
+ self.send_error(404)
+
+ def log_message(self, format: str, *args: object) -> None:
+ # Suppress request logging to keep terminal clean
+ pass
+
+
+def main() -> None:
+ parser = argparse.ArgumentParser(description="Generate and serve eval review")
+ parser.add_argument("workspace", type=Path, help="Path to workspace directory")
+ parser.add_argument("--port", "-p", type=int, default=3117, help="Server port (default: 3117)")
+ parser.add_argument("--skill-name", "-n", type=str, default=None, help="Skill name for header")
+ parser.add_argument(
+ "--previous-workspace", type=Path, default=None,
+ help="Path to previous iteration's workspace (shows old outputs and feedback as context)",
+ )
+ parser.add_argument(
+ "--benchmark", type=Path, default=None,
+ help="Path to benchmark.json to show in the Benchmark tab",
+ )
+ parser.add_argument(
+ "--static", "-s", type=Path, default=None,
+ help="Write standalone HTML to this path instead of starting a server",
+ )
+ args = parser.parse_args()
+
+ workspace = args.workspace.resolve()
+ if not workspace.is_dir():
+ print(f"Error: {workspace} is not a directory", file=sys.stderr)
+ sys.exit(1)
+
+ runs = find_runs(workspace)
+ if not runs:
+ print(f"No runs found in {workspace}", file=sys.stderr)
+ sys.exit(1)
+
+ skill_name = args.skill_name or workspace.name.replace("-workspace", "")
+ feedback_path = workspace / "feedback.json"
+
+ previous: dict[str, dict] = {}
+ if args.previous_workspace:
+ previous = load_previous_iteration(args.previous_workspace.resolve())
+
+ benchmark_path = args.benchmark.resolve() if args.benchmark else None
+ benchmark = None
+ if benchmark_path and benchmark_path.exists():
+ try:
+ benchmark = json.loads(benchmark_path.read_text())
+ except (json.JSONDecodeError, OSError):
+ pass
+
+ if args.static:
+ html = generate_html(runs, skill_name, previous, benchmark)
+ args.static.parent.mkdir(parents=True, exist_ok=True)
+ args.static.write_text(html)
+ print(f"\n Static viewer written to: {args.static}\n")
+ sys.exit(0)
+
+ # Kill any existing process on the target port
+ port = args.port
+ _kill_port(port)
+ handler = partial(ReviewHandler, workspace, skill_name, feedback_path, previous, benchmark_path)
+ try:
+ server = HTTPServer(("127.0.0.1", port), handler)
+ except OSError:
+ # Port still in use after kill attempt — find a free one
+ server = HTTPServer(("127.0.0.1", 0), handler)
+ port = server.server_address[1]
+
+ url = f"http://localhost:{port}"
+ print(f"\n Eval Viewer")
+ print(f" ─────────────────────────────────")
+ print(f" URL: {url}")
+ print(f" Workspace: {workspace}")
+ print(f" Feedback: {feedback_path}")
+ if previous:
+ print(f" Previous: {args.previous_workspace} ({len(previous)} runs)")
+ if benchmark_path:
+ print(f" Benchmark: {benchmark_path}")
+ print(f"\n Press Ctrl+C to stop.\n")
+
+ webbrowser.open(url)
+
+ try:
+ server.serve_forever()
+ except KeyboardInterrupt:
+ print("\nStopped.")
+ server.server_close()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/.claude/skills/skill-creator/eval-viewer/viewer.html b/.claude/skills/skill-creator/eval-viewer/viewer.html
new file mode 100644
index 000000000..6d8e96348
--- /dev/null
+++ b/.claude/skills/skill-creator/eval-viewer/viewer.html
@@ -0,0 +1,1325 @@
+<!-- Eval Review viewer template (markup elided) -->
+<!-- Title: "Eval Review" -->
+<!-- Header text: "Review each output and leave feedback below. Navigate with arrow keys or buttons. When done, copy feedback and paste into Claude Code." -->
+<!-- Sections per run: Prompt, Output ("No output files found" when empty), Previous Output, Formal Grades, Your Feedback, Previous feedback -->
+<!-- Benchmark tab empty state: "No benchmark data available. Run a benchmark to see quantitative results here." -->
+<!-- Completion screen: "Review Complete. Your feedback has been saved. Go back to your Claude Code session and tell Claude you're done reviewing." -->
diff --git a/.claude/skills/skill-creator/references/schemas.md b/.claude/skills/skill-creator/references/schemas.md
new file mode 100644
index 000000000..b6eeaa2d4
--- /dev/null
+++ b/.claude/skills/skill-creator/references/schemas.md
@@ -0,0 +1,430 @@
+# JSON Schemas
+
+This document defines the JSON schemas used by skill-creator.
+
+---
+
+## evals.json
+
+Defines the evals for a skill. Located at `evals/evals.json` within the skill directory.
+
+```json
+{
+ "skill_name": "example-skill",
+ "evals": [
+ {
+ "id": 1,
+ "prompt": "User's example prompt",
+ "expected_output": "Description of expected result",
+ "files": ["evals/files/sample1.pdf"],
+ "expectations": [
+ "The output includes X",
+ "The skill used script Y"
+ ]
+ }
+ ]
+}
+```
+
+**Fields:**
+- `skill_name`: Name matching the skill's frontmatter
+- `evals[].id`: Unique integer identifier
+- `evals[].prompt`: The task to execute
+- `evals[].expected_output`: Human-readable description of success
+- `evals[].files`: Optional list of input file paths (relative to skill root)
+- `evals[].expectations`: List of verifiable statements
+
+---
+
+## history.json
+
+Tracks version progression in Improve mode. Located at workspace root.
+
+```json
+{
+ "started_at": "2026-01-15T10:30:00Z",
+ "skill_name": "pdf",
+ "current_best": "v2",
+ "iterations": [
+ {
+ "version": "v0",
+ "parent": null,
+ "expectation_pass_rate": 0.65,
+ "grading_result": "baseline",
+ "is_current_best": false
+ },
+ {
+ "version": "v1",
+ "parent": "v0",
+ "expectation_pass_rate": 0.75,
+ "grading_result": "won",
+ "is_current_best": false
+ },
+ {
+ "version": "v2",
+ "parent": "v1",
+ "expectation_pass_rate": 0.85,
+ "grading_result": "won",
+ "is_current_best": true
+ }
+ ]
+}
+```
+
+**Fields:**
+- `started_at`: ISO timestamp of when improvement started
+- `skill_name`: Name of the skill being improved
+- `current_best`: Version identifier of the best performer
+- `iterations[].version`: Version identifier (v0, v1, ...)
+- `iterations[].parent`: Parent version this was derived from
+- `iterations[].expectation_pass_rate`: Pass rate from grading
+- `iterations[].grading_result`: "baseline", "won", "lost", or "tie"
+- `iterations[].is_current_best`: Whether this is the current best version
+
+---
+
+## grading.json
+
+Output from the grader agent. Located at `<run_dir>/grading.json`.
+
+```json
+{
+ "expectations": [
+ {
+ "text": "The output includes the name 'John Smith'",
+ "passed": true,
+ "evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
+ },
+ {
+ "text": "The spreadsheet has a SUM formula in cell B10",
+ "passed": false,
+ "evidence": "No spreadsheet was created. The output was a text file."
+ }
+ ],
+ "summary": {
+ "passed": 2,
+ "failed": 1,
+ "total": 3,
+ "pass_rate": 0.67
+ },
+ "execution_metrics": {
+ "tool_calls": {
+ "Read": 5,
+ "Write": 2,
+ "Bash": 8
+ },
+ "total_tool_calls": 15,
+ "total_steps": 6,
+ "errors_encountered": 0,
+ "output_chars": 12450,
+ "transcript_chars": 3200
+ },
+ "timing": {
+ "executor_duration_seconds": 165.0,
+ "grader_duration_seconds": 26.0,
+ "total_duration_seconds": 191.0
+ },
+ "claims": [
+ {
+ "claim": "The form has 12 fillable fields",
+ "type": "factual",
+ "verified": true,
+ "evidence": "Counted 12 fields in field_info.json"
+ }
+ ],
+ "user_notes_summary": {
+ "uncertainties": ["Used 2023 data, may be stale"],
+ "needs_review": [],
+ "workarounds": ["Fell back to text overlay for non-fillable fields"]
+ },
+ "eval_feedback": {
+ "suggestions": [
+ {
+ "assertion": "The output includes the name 'John Smith'",
+ "reason": "A hallucinated document that mentions the name would also pass"
+ }
+ ],
+ "overall": "Assertions check presence but not correctness."
+ }
+}
+```
+
+**Fields:**
+- `expectations[]`: Graded expectations with evidence
+- `summary`: Aggregate pass/fail counts
+- `execution_metrics`: Tool usage and output size (from executor's metrics.json)
+- `timing`: Wall clock timing (from timing.json)
+- `claims`: Extracted and verified claims from the output
+- `user_notes_summary`: Issues flagged by the executor
+- `eval_feedback`: (optional) Improvement suggestions for the evals, only present when the grader identifies issues worth raising
+
+---
+
+## metrics.json
+
+Output from the executor agent. Located at `<run_dir>/outputs/metrics.json`.
+
+```json
+{
+ "tool_calls": {
+ "Read": 5,
+ "Write": 2,
+ "Bash": 8,
+ "Edit": 1,
+ "Glob": 2,
+ "Grep": 0
+ },
+ "total_tool_calls": 18,
+ "total_steps": 6,
+ "files_created": ["filled_form.pdf", "field_values.json"],
+ "errors_encountered": 0,
+ "output_chars": 12450,
+ "transcript_chars": 3200
+}
+```
+
+**Fields:**
+- `tool_calls`: Count per tool type
+- `total_tool_calls`: Sum of all tool calls
+- `total_steps`: Number of major execution steps
+- `files_created`: List of output files created
+- `errors_encountered`: Number of errors during execution
+- `output_chars`: Total character count of output files
+- `transcript_chars`: Character count of transcript
+
+---
+
+## timing.json
+
+Wall clock timing for a run. Located at `<run_dir>/timing.json`.
+
+**How to capture:** When a subagent task completes, the task notification includes `total_tokens` and `duration_ms`. Save these immediately — they are not persisted anywhere else and cannot be recovered after the fact.
+
+```json
+{
+ "total_tokens": 84852,
+ "duration_ms": 23332,
+ "total_duration_seconds": 23.3,
+ "executor_start": "2026-01-15T10:30:00Z",
+ "executor_end": "2026-01-15T10:32:45Z",
+ "executor_duration_seconds": 165.0,
+ "grader_start": "2026-01-15T10:32:46Z",
+ "grader_end": "2026-01-15T10:33:12Z",
+ "grader_duration_seconds": 26.0
+}
+```
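+
+A minimal sketch of capturing those values the moment the notification arrives (the output path and surrounding timestamps are up to you):
+
+```python
+import json
+from pathlib import Path
+
+# Values copied from the subagent task notification; they are not persisted anywhere else.
+notification = {"total_tokens": 84852, "duration_ms": 23332}
+
+timing = {
+    **notification,
+    "total_duration_seconds": round(notification["duration_ms"] / 1000, 1),
+}
+Path("timing.json").write_text(json.dumps(timing, indent=2) + "\n")
+```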
+
+---
+
+## benchmark.json
+
+Output from Benchmark mode. Located at `benchmarks/<timestamp>/benchmark.json`.
+
+```json
+{
+ "metadata": {
+ "skill_name": "pdf",
+ "skill_path": "/path/to/pdf",
+ "executor_model": "claude-sonnet-4-20250514",
+ "analyzer_model": "most-capable-model",
+ "timestamp": "2026-01-15T10:30:00Z",
+ "evals_run": [1, 2, 3],
+ "runs_per_configuration": 3
+ },
+
+ "runs": [
+ {
+ "eval_id": 1,
+ "eval_name": "Ocean",
+ "configuration": "with_skill",
+ "run_number": 1,
+ "result": {
+ "pass_rate": 0.85,
+ "passed": 6,
+ "failed": 1,
+ "total": 7,
+ "time_seconds": 42.5,
+ "tokens": 3800,
+ "tool_calls": 18,
+ "errors": 0
+ },
+ "expectations": [
+ {"text": "...", "passed": true, "evidence": "..."}
+ ],
+ "notes": [
+ "Used 2023 data, may be stale",
+ "Fell back to text overlay for non-fillable fields"
+ ]
+ }
+ ],
+
+ "run_summary": {
+ "with_skill": {
+ "pass_rate": {"mean": 0.85, "stddev": 0.05, "min": 0.80, "max": 0.90},
+ "time_seconds": {"mean": 45.0, "stddev": 12.0, "min": 32.0, "max": 58.0},
+ "tokens": {"mean": 3800, "stddev": 400, "min": 3200, "max": 4100}
+ },
+ "without_skill": {
+ "pass_rate": {"mean": 0.35, "stddev": 0.08, "min": 0.28, "max": 0.45},
+ "time_seconds": {"mean": 32.0, "stddev": 8.0, "min": 24.0, "max": 42.0},
+ "tokens": {"mean": 2100, "stddev": 300, "min": 1800, "max": 2500}
+ },
+ "delta": {
+ "pass_rate": "+0.50",
+ "time_seconds": "+13.0",
+ "tokens": "+1700"
+ }
+ },
+
+ "notes": [
+ "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
+ "Eval 3 shows high variance (50% ± 40%) - may be flaky or model-dependent",
+ "Without-skill runs consistently fail on table extraction expectations",
+ "Skill adds 13s average execution time but improves pass rate by 50%"
+ ]
+}
+```
+
+**Fields:**
+- `metadata`: Information about the benchmark run
+ - `skill_name`: Name of the skill
+ - `timestamp`: When the benchmark was run
+ - `evals_run`: List of eval names or IDs
+ - `runs_per_configuration`: Number of runs per config (e.g. 3)
+- `runs[]`: Individual run results
+ - `eval_id`: Numeric eval identifier
+ - `eval_name`: Human-readable eval name (used as section header in the viewer)
+ - `configuration`: Must be `"with_skill"` or `"without_skill"` (the viewer uses this exact string for grouping and color coding)
+ - `run_number`: Integer run number (1, 2, 3...)
+  - `result`: Nested object with `pass_rate`, `passed`, `failed`, `total`, `time_seconds`, `tokens`, `tool_calls`, and `errors`
+- `run_summary`: Statistical aggregates per configuration
+  - `with_skill` / `without_skill`: Each contains `pass_rate`, `time_seconds`, and `tokens` objects with `mean`, `stddev`, `min`, and `max` fields
+ - `delta`: Difference strings like `"+0.50"`, `"+13.0"`, `"+1700"`
+- `notes`: Freeform observations from the analyzer
+
+**Important:** The viewer reads these field names exactly. Using `config` instead of `configuration`, or putting `pass_rate` at the top level of a run instead of nested under `result`, will cause the viewer to show empty/zero values. Always reference this schema when generating benchmark.json manually.
+
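+A quick structural check before handing a hand-built benchmark.json to the viewer catches both of those mistakes. A minimal sketch:
+
+```python
+import json
+import sys
+from pathlib import Path
+
+def check_benchmark(path: str) -> list[str]:
+    data = json.loads(Path(path).read_text())
+    problems = []
+    for i, run in enumerate(data.get("runs", [])):
+        if run.get("configuration") not in ("with_skill", "without_skill"):
+            problems.append(f"runs[{i}]: 'configuration' must be 'with_skill' or 'without_skill'")
+        if "pass_rate" not in run.get("result", {}):
+            problems.append(f"runs[{i}]: 'pass_rate' must be nested under 'result'")
+    for config in ("with_skill", "without_skill"):
+        if config not in data.get("run_summary", {}):
+            problems.append(f"run_summary is missing '{config}'")
+    return problems
+
+if __name__ == "__main__":
+    issues = check_benchmark(sys.argv[1])
+    print("\n".join(issues) if issues else "benchmark.json looks structurally sound")
+```
+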
+---
+
+## comparison.json
+
+Output from blind comparator. Located at `/comparison-N.json`.
+
+```json
+{
+ "winner": "A",
+ "reasoning": "Output A provides a complete solution with proper formatting and all required fields. Output B is missing the date field and has formatting inconsistencies.",
+ "rubric": {
+ "A": {
+ "content": {
+ "correctness": 5,
+ "completeness": 5,
+ "accuracy": 4
+ },
+ "structure": {
+ "organization": 4,
+ "formatting": 5,
+ "usability": 4
+ },
+ "content_score": 4.7,
+ "structure_score": 4.3,
+ "overall_score": 9.0
+ },
+ "B": {
+ "content": {
+ "correctness": 3,
+ "completeness": 2,
+ "accuracy": 3
+ },
+ "structure": {
+ "organization": 3,
+ "formatting": 2,
+ "usability": 3
+ },
+ "content_score": 2.7,
+ "structure_score": 2.7,
+ "overall_score": 5.4
+ }
+ },
+ "output_quality": {
+ "A": {
+ "score": 9,
+ "strengths": ["Complete solution", "Well-formatted", "All fields present"],
+ "weaknesses": ["Minor style inconsistency in header"]
+ },
+ "B": {
+ "score": 5,
+ "strengths": ["Readable output", "Correct basic structure"],
+ "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"]
+ }
+ },
+ "expectation_results": {
+ "A": {
+ "passed": 4,
+ "total": 5,
+ "pass_rate": 0.80,
+ "details": [
+ {"text": "Output includes name", "passed": true}
+ ]
+ },
+ "B": {
+ "passed": 3,
+ "total": 5,
+ "pass_rate": 0.60,
+ "details": [
+ {"text": "Output includes name", "passed": true}
+ ]
+ }
+ }
+}
+```
+
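+The `-N` suffix suggests a pair of outputs may be compared more than once; a minimal sketch for tallying winners across rounds (the directory and glob are assumptions):
+
+```python
+import json
+from collections import Counter
+from pathlib import Path
+
+votes = Counter()
+for path in sorted(Path(".").glob("comparison-*.json")):
+    votes[json.loads(path.read_text())["winner"]] += 1
+
+print(dict(votes))  # e.g. {'A': 2, 'B': 1}
+if votes:
+    print("overall winner:", votes.most_common(1)[0][0])
+```
+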
+---
+
+## analysis.json
+
+Output from post-hoc analyzer. Located at `/analysis.json`.
+
+```json
+{
+ "comparison_summary": {
+ "winner": "A",
+ "winner_skill": "path/to/winner/skill",
+ "loser_skill": "path/to/loser/skill",
+ "comparator_reasoning": "Brief summary of why comparator chose winner"
+ },
+ "winner_strengths": [
+ "Clear step-by-step instructions for handling multi-page documents",
+ "Included validation script that caught formatting errors"
+ ],
+ "loser_weaknesses": [
+ "Vague instruction 'process the document appropriately' led to inconsistent behavior",
+ "No script for validation, agent had to improvise"
+ ],
+ "instruction_following": {
+ "winner": {
+ "score": 9,
+ "issues": ["Minor: skipped optional logging step"]
+ },
+ "loser": {
+ "score": 6,
+ "issues": [
+ "Did not use the skill's formatting template",
+ "Invented own approach instead of following step 3"
+ ]
+ }
+ },
+ "improvement_suggestions": [
+ {
+ "priority": "high",
+ "category": "instructions",
+ "suggestion": "Replace 'process the document appropriately' with explicit steps",
+ "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
+ }
+ ],
+ "transcript_insights": {
+ "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script",
+ "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods"
+ }
+}
+```
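+When feeding an analysis back into a skill revision, the high-priority suggestions are usually the actionable part. A minimal sketch for pulling them out (path illustrative):
+
+```python
+import json
+from pathlib import Path
+
+analysis = json.loads(Path("analysis.json").read_text())
+
+for s in analysis.get("improvement_suggestions", []):
+    if s.get("priority") == "high":
+        print(f"[{s['category']}] {s['suggestion']} (impact: {s['expected_impact']})")
+```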
diff --git a/.claude/skills/skill-creator/scripts/__init__.py b/.claude/skills/skill-creator/scripts/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/.claude/skills/skill-creator/scripts/aggregate_benchmark.py b/.claude/skills/skill-creator/scripts/aggregate_benchmark.py
new file mode 100755
index 000000000..3e66e8c10
--- /dev/null
+++ b/.claude/skills/skill-creator/scripts/aggregate_benchmark.py
@@ -0,0 +1,401 @@
+#!/usr/bin/env python3
+"""
+Aggregate individual run results into benchmark summary statistics.
+
+Reads grading.json files from run directories and produces:
+- run_summary with mean, stddev, min, max for each metric
+- delta between with_skill and without_skill configurations
+
+Usage:
+    python aggregate_benchmark.py <benchmark-dir>
+
+Example:
+ python aggregate_benchmark.py benchmarks/2026-01-15T10-30-00/
+
+The script supports two directory layouts:
+
+ Workspace layout (from skill-creator iterations):
+    <workspace>/
+ └── eval-N/
+ ├── with_skill/
+ │ ├── run-1/grading.json
+ │ └── run-2/grading.json
+ └── without_skill/
+ ├── run-1/grading.json
+ └── run-2/grading.json
+
+ Legacy layout (with runs/ subdirectory):
+    <benchmark-dir>/
+ └── runs/
+ └── eval-N/
+ ├── with_skill/
+ │ └── run-1/grading.json
+ └── without_skill/
+ └── run-1/grading.json
+"""
+
+import argparse
+import json
+import math
+import sys
+from datetime import datetime, timezone
+from pathlib import Path
+
+
+def calculate_stats(values: list[float]) -> dict:
+ """Calculate mean, stddev, min, max for a list of values."""
+ if not values:
+ return {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0}
+
+ n = len(values)
+ mean = sum(values) / n
+
+ if n > 1:
+ variance = sum((x - mean) ** 2 for x in values) / (n - 1)
+ stddev = math.sqrt(variance)
+ else:
+ stddev = 0.0
+
+ return {
+ "mean": round(mean, 4),
+ "stddev": round(stddev, 4),
+ "min": round(min(values), 4),
+ "max": round(max(values), 4)
+ }
+
+
+def load_run_results(benchmark_dir: Path) -> dict:
+ """
+ Load all run results from a benchmark directory.
+
+ Returns dict keyed by config name (e.g. "with_skill"/"without_skill",
+ or "new_skill"/"old_skill"), each containing a list of run results.
+ """
+ # Support both layouts: eval dirs directly under benchmark_dir, or under runs/
+ runs_dir = benchmark_dir / "runs"
+ if runs_dir.exists():
+ search_dir = runs_dir
+ elif list(benchmark_dir.glob("eval-*")):
+ search_dir = benchmark_dir
+ else:
+ print(f"No eval directories found in {benchmark_dir} or {benchmark_dir / 'runs'}")
+ return {}
+
+ results: dict[str, list] = {}
+
+ for eval_idx, eval_dir in enumerate(sorted(search_dir.glob("eval-*"))):
+ metadata_path = eval_dir / "eval_metadata.json"
+ if metadata_path.exists():
+ try:
+ with open(metadata_path) as mf:
+ eval_id = json.load(mf).get("eval_id", eval_idx)
+ except (json.JSONDecodeError, OSError):
+ eval_id = eval_idx
+ else:
+ try:
+ eval_id = int(eval_dir.name.split("-")[1])
+ except ValueError:
+ eval_id = eval_idx
+
+ # Discover config directories dynamically rather than hardcoding names
+ for config_dir in sorted(eval_dir.iterdir()):
+ if not config_dir.is_dir():
+ continue
+ # Skip non-config directories (inputs, outputs, etc.)
+ if not list(config_dir.glob("run-*")):
+ continue
+ config = config_dir.name
+ if config not in results:
+ results[config] = []
+
+ for run_dir in sorted(config_dir.glob("run-*")):
+ run_number = int(run_dir.name.split("-")[1])
+ grading_file = run_dir / "grading.json"
+
+ if not grading_file.exists():
+ print(f"Warning: grading.json not found in {run_dir}")
+ continue
+
+ try:
+ with open(grading_file) as f:
+ grading = json.load(f)
+ except json.JSONDecodeError as e:
+ print(f"Warning: Invalid JSON in {grading_file}: {e}")
+ continue
+
+ # Extract metrics
+ result = {
+ "eval_id": eval_id,
+ "run_number": run_number,
+ "pass_rate": grading.get("summary", {}).get("pass_rate", 0.0),
+ "passed": grading.get("summary", {}).get("passed", 0),
+ "failed": grading.get("summary", {}).get("failed", 0),
+ "total": grading.get("summary", {}).get("total", 0),
+ }
+
+ # Extract timing — check grading.json first, then sibling timing.json
+ timing = grading.get("timing", {})
+ result["time_seconds"] = timing.get("total_duration_seconds", 0.0)
+ timing_file = run_dir / "timing.json"
+ if result["time_seconds"] == 0.0 and timing_file.exists():
+ try:
+ with open(timing_file) as tf:
+ timing_data = json.load(tf)
+ result["time_seconds"] = timing_data.get("total_duration_seconds", 0.0)
+ result["tokens"] = timing_data.get("total_tokens", 0)
+ except json.JSONDecodeError:
+ pass
+
+ # Extract metrics if available
+ metrics = grading.get("execution_metrics", {})
+ result["tool_calls"] = metrics.get("total_tool_calls", 0)
+ if not result.get("tokens"):
+ result["tokens"] = metrics.get("output_chars", 0)
+ result["errors"] = metrics.get("errors_encountered", 0)
+
+ # Extract expectations — viewer requires fields: text, passed, evidence
+ raw_expectations = grading.get("expectations", [])
+ for exp in raw_expectations:
+ if "text" not in exp or "passed" not in exp:
+ print(f"Warning: expectation in {grading_file} missing required fields (text, passed, evidence): {exp}")
+ result["expectations"] = raw_expectations
+
+ # Extract notes from user_notes_summary
+ notes_summary = grading.get("user_notes_summary", {})
+ notes = []
+ notes.extend(notes_summary.get("uncertainties", []))
+ notes.extend(notes_summary.get("needs_review", []))
+ notes.extend(notes_summary.get("workarounds", []))
+ result["notes"] = notes
+
+ results[config].append(result)
+
+ return results
+
+
+def aggregate_results(results: dict) -> dict:
+ """
+ Aggregate run results into summary statistics.
+
+ Returns run_summary with stats for each configuration and delta.
+ """
+ run_summary = {}
+ configs = list(results.keys())
+
+ for config in configs:
+ runs = results.get(config, [])
+
+ if not runs:
+ run_summary[config] = {
+ "pass_rate": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
+ "time_seconds": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0},
+ "tokens": {"mean": 0, "stddev": 0, "min": 0, "max": 0}
+ }
+ continue
+
+ pass_rates = [r["pass_rate"] for r in runs]
+ times = [r["time_seconds"] for r in runs]
+ tokens = [r.get("tokens", 0) for r in runs]
+
+ run_summary[config] = {
+ "pass_rate": calculate_stats(pass_rates),
+ "time_seconds": calculate_stats(times),
+ "tokens": calculate_stats(tokens)
+ }
+
+ # Calculate delta between the first two configs (if two exist)
+ if len(configs) >= 2:
+ primary = run_summary.get(configs[0], {})
+ baseline = run_summary.get(configs[1], {})
+ else:
+ primary = run_summary.get(configs[0], {}) if configs else {}
+ baseline = {}
+
+ delta_pass_rate = primary.get("pass_rate", {}).get("mean", 0) - baseline.get("pass_rate", {}).get("mean", 0)
+ delta_time = primary.get("time_seconds", {}).get("mean", 0) - baseline.get("time_seconds", {}).get("mean", 0)
+ delta_tokens = primary.get("tokens", {}).get("mean", 0) - baseline.get("tokens", {}).get("mean", 0)
+
+ run_summary["delta"] = {
+ "pass_rate": f"{delta_pass_rate:+.2f}",
+ "time_seconds": f"{delta_time:+.1f}",
+ "tokens": f"{delta_tokens:+.0f}"
+ }
+
+ return run_summary
+
+
+def generate_benchmark(benchmark_dir: Path, skill_name: str = "", skill_path: str = "") -> dict:
+ """
+ Generate complete benchmark.json from run results.
+ """
+ results = load_run_results(benchmark_dir)
+ run_summary = aggregate_results(results)
+
+ # Build runs array for benchmark.json
+ runs = []
+ for config in results:
+ for result in results[config]:
+ runs.append({
+ "eval_id": result["eval_id"],
+ "configuration": config,
+ "run_number": result["run_number"],
+ "result": {
+ "pass_rate": result["pass_rate"],
+ "passed": result["passed"],
+ "failed": result["failed"],
+ "total": result["total"],
+ "time_seconds": result["time_seconds"],
+ "tokens": result.get("tokens", 0),
+ "tool_calls": result.get("tool_calls", 0),
+ "errors": result.get("errors", 0)
+ },
+ "expectations": result["expectations"],
+ "notes": result["notes"]
+ })
+
+ # Determine eval IDs from results
+ eval_ids = sorted(set(
+ r["eval_id"]
+ for config in results.values()
+ for r in config
+ ))
+
+ benchmark = {
+ "metadata": {
+ "skill_name": skill_name or "",
+ "skill_path": skill_path or "",
+ "executor_model": "",
+ "analyzer_model": "",
+ "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
+ "evals_run": eval_ids,
+ "runs_per_configuration": 3
+ },
+ "runs": runs,
+ "run_summary": run_summary,
+ "notes": [] # To be filled by analyzer
+ }
+
+ return benchmark
+
+
+def generate_markdown(benchmark: dict) -> str:
+ """Generate human-readable benchmark.md from benchmark data."""
+ metadata = benchmark["metadata"]
+ run_summary = benchmark["run_summary"]
+
+ # Determine config names (excluding "delta")
+ configs = [k for k in run_summary if k != "delta"]
+ config_a = configs[0] if len(configs) >= 1 else "config_a"
+ config_b = configs[1] if len(configs) >= 2 else "config_b"
+ label_a = config_a.replace("_", " ").title()
+ label_b = config_b.replace("_", " ").title()
+
+ lines = [
+ f"# Skill Benchmark: {metadata['skill_name']}",
+ "",
+ f"**Model**: {metadata['executor_model']}",
+ f"**Date**: {metadata['timestamp']}",
+ f"**Evals**: {', '.join(map(str, metadata['evals_run']))} ({metadata['runs_per_configuration']} runs each per configuration)",
+ "",
+ "## Summary",
+ "",
+ f"| Metric | {label_a} | {label_b} | Delta |",
+ "|--------|------------|---------------|-------|",
+ ]
+
+ a_summary = run_summary.get(config_a, {})
+ b_summary = run_summary.get(config_b, {})
+ delta = run_summary.get("delta", {})
+
+ # Format pass rate
+ a_pr = a_summary.get("pass_rate", {})
+ b_pr = b_summary.get("pass_rate", {})
+ lines.append(f"| Pass Rate | {a_pr.get('mean', 0)*100:.0f}% ± {a_pr.get('stddev', 0)*100:.0f}% | {b_pr.get('mean', 0)*100:.0f}% ± {b_pr.get('stddev', 0)*100:.0f}% | {delta.get('pass_rate', '—')} |")
+
+ # Format time
+ a_time = a_summary.get("time_seconds", {})
+ b_time = b_summary.get("time_seconds", {})
+ lines.append(f"| Time | {a_time.get('mean', 0):.1f}s ± {a_time.get('stddev', 0):.1f}s | {b_time.get('mean', 0):.1f}s ± {b_time.get('stddev', 0):.1f}s | {delta.get('time_seconds', '—')}s |")
+
+ # Format tokens
+ a_tokens = a_summary.get("tokens", {})
+ b_tokens = b_summary.get("tokens", {})
+ lines.append(f"| Tokens | {a_tokens.get('mean', 0):.0f} ± {a_tokens.get('stddev', 0):.0f} | {b_tokens.get('mean', 0):.0f} ± {b_tokens.get('stddev', 0):.0f} | {delta.get('tokens', '—')} |")
+
+ # Notes section
+ if benchmark.get("notes"):
+ lines.extend([
+ "",
+ "## Notes",
+ ""
+ ])
+ for note in benchmark["notes"]:
+ lines.append(f"- {note}")
+
+ return "\n".join(lines)
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Aggregate benchmark run results into summary statistics"
+ )
+ parser.add_argument(
+ "benchmark_dir",
+ type=Path,
+ help="Path to the benchmark directory"
+ )
+ parser.add_argument(
+ "--skill-name",
+ default="",
+ help="Name of the skill being benchmarked"
+ )
+ parser.add_argument(
+ "--skill-path",
+ default="",
+ help="Path to the skill being benchmarked"
+ )
+ parser.add_argument(
+ "--output", "-o",
+ type=Path,
+ help="Output path for benchmark.json (default: /benchmark.json)"
+ )
+
+ args = parser.parse_args()
+
+ if not args.benchmark_dir.exists():
+ print(f"Directory not found: {args.benchmark_dir}")
+ sys.exit(1)
+
+ # Generate benchmark
+ benchmark = generate_benchmark(args.benchmark_dir, args.skill_name, args.skill_path)
+
+ # Determine output paths
+ output_json = args.output or (args.benchmark_dir / "benchmark.json")
+ output_md = output_json.with_suffix(".md")
+
+ # Write benchmark.json
+ with open(output_json, "w") as f:
+ json.dump(benchmark, f, indent=2)
+ print(f"Generated: {output_json}")
+
+ # Write benchmark.md
+ markdown = generate_markdown(benchmark)
+ with open(output_md, "w") as f:
+ f.write(markdown)
+ print(f"Generated: {output_md}")
+
+ # Print summary
+ run_summary = benchmark["run_summary"]
+ configs = [k for k in run_summary if k != "delta"]
+ delta = run_summary.get("delta", {})
+
+ print(f"\nSummary:")
+ for config in configs:
+ pr = run_summary[config]["pass_rate"]["mean"]
+ label = config.replace("_", " ").title()
+ print(f" {label}: {pr*100:.1f}% pass rate")
+ print(f" Delta: {delta.get('pass_rate', '—')}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/.claude/skills/skill-creator/scripts/generate_report.py b/.claude/skills/skill-creator/scripts/generate_report.py
new file mode 100755
index 000000000..959e30a00
--- /dev/null
+++ b/.claude/skills/skill-creator/scripts/generate_report.py
@@ -0,0 +1,326 @@
+#!/usr/bin/env python3
+"""Generate an HTML report from run_loop.py output.
+
+Takes the JSON output from run_loop.py and generates a visual HTML report
+showing each description attempt with check/x for each test case.
+Distinguishes between train and test queries.
+"""
+
+import argparse
+import html
+import json
+import sys
+from pathlib import Path
+
+
+def generate_html(data: dict, auto_refresh: bool = False, skill_name: str = "") -> str:
+ """Generate HTML report from loop output data. If auto_refresh is True, adds a meta refresh tag."""
+ history = data.get("history", [])
+ holdout = data.get("holdout", 0)
+ title_prefix = html.escape(skill_name + " \u2014 ") if skill_name else ""
+
+ # Get all unique queries from train and test sets, with should_trigger info
+ train_queries: list[dict] = []
+ test_queries: list[dict] = []
+ if history:
+ for r in history[0].get("train_results", history[0].get("results", [])):
+ train_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})
+ if history[0].get("test_results"):
+ for r in history[0].get("test_results", []):
+ test_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)})
+
+    # Auto-refresh interval is arbitrary; 5s keeps the live report reasonably fresh while the loop runs.
+    refresh_tag = '<meta http-equiv="refresh" content="5">\n' if auto_refresh else ""
+
+    # Single self-contained page: minimal inline CSS for the pass/fail, score, and best-row classes used below.
+    html_parts = ["""<!DOCTYPE html>
+<html>
+<head>
+<meta charset="utf-8">
+""" + refresh_tag + """<title>""" + title_prefix + """Skill Description Optimization</title>
+<style>
+  body { font-family: sans-serif; margin: 2em; }
+  table { border-collapse: collapse; }
+  th, td { border: 1px solid #ccc; padding: 4px 8px; text-align: center; vertical-align: top; }
+  .desc-col { text-align: left; max-width: 40em; }
+  .positive-col { background: #eaf5ea; }  .negative-col { background: #f6eaea; }  .test-col { background: #eef3fb; }
+  .pass { color: #1a7f37; }  .fail { color: #b42318; }
+  .score-good { background: #d7efd7; }  .score-ok { background: #fdf3d0; }  .score-bad { background: #f8d9d9; }
+  .best-row { outline: 2px solid #1a7f37; }
+</style>
+</head>
+<body>
+<h1>""" + title_prefix + """Skill Description Optimization</h1>
+<p>
+Optimizing your skill's description. This page updates automatically as Claude tests different versions of your skill's description. Each row is an iteration — a new description attempt. The columns show test queries: green checkmarks mean the skill triggered correctly (or correctly didn't trigger), red crosses mean it got it wrong. The "Train" score shows performance on queries used to improve the description; the "Test" score shows performance on held-out queries the optimizer hasn't seen. When it's done, Claude will apply the best-performing description to your skill.
+</p>
+<table>
+<thead>
+<tr>
+<th>Iter</th><th>Train</th><th>Test</th><th class="desc-col">Description</th>
+"""]
+
+ # Add column headers for train queries
+ for qinfo in train_queries:
+ polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
+        html_parts.append(f'<th class="{polarity}">{html.escape(qinfo["query"])}</th>\n')
+
+ # Add column headers for test queries (different color)
+ for qinfo in test_queries:
+ polarity = "positive-col" if qinfo["should_trigger"] else "negative-col"
+        html_parts.append(f'<th class="test-col {polarity}">{html.escape(qinfo["query"])}</th>\n')
+
+ html_parts.append("""
+
+
+""")
+
+ # Find best iteration for highlighting
+ if test_queries:
+ best_iter = max(history, key=lambda h: h.get("test_passed") or 0).get("iteration")
+ else:
+ best_iter = max(history, key=lambda h: h.get("train_passed", h.get("passed", 0))).get("iteration")
+
+ # Add rows for each iteration
+ for h in history:
+ iteration = h.get("iteration", "?")
+ train_passed = h.get("train_passed", h.get("passed", 0))
+ train_total = h.get("train_total", h.get("total", 0))
+ test_passed = h.get("test_passed")
+ test_total = h.get("test_total")
+ description = h.get("description", "")
+ train_results = h.get("train_results", h.get("results", []))
+ test_results = h.get("test_results", [])
+
+ # Create lookups for results by query
+ train_by_query = {r["query"]: r for r in train_results}
+ test_by_query = {r["query"]: r for r in test_results} if test_results else {}
+
+ # Compute aggregate correct/total runs across all retries
+ def aggregate_runs(results: list[dict]) -> tuple[int, int]:
+ correct = 0
+ total = 0
+ for r in results:
+ runs = r.get("runs", 0)
+ triggers = r.get("triggers", 0)
+ total += runs
+ if r.get("should_trigger", True):
+ correct += triggers
+ else:
+ correct += runs - triggers
+ return correct, total
+
+ train_correct, train_runs = aggregate_runs(train_results)
+ test_correct, test_runs = aggregate_runs(test_results)
+
+ # Determine score classes
+ def score_class(correct: int, total: int) -> str:
+ if total > 0:
+ ratio = correct / total
+ if ratio >= 0.8:
+ return "score-good"
+ elif ratio >= 0.5:
+ return "score-ok"
+ return "score-bad"
+
+ train_class = score_class(train_correct, train_runs)
+ test_class = score_class(test_correct, test_runs)
+
+ row_class = "best-row" if iteration == best_iter else ""
+
+ html_parts.append(f"""
+
{iteration}
+
{train_correct}/{train_runs}
+
{test_correct}/{test_runs}
+
{html.escape(description)}
+""")
+
+ # Add result for each train query
+ for qinfo in train_queries:
+ r = train_by_query.get(qinfo["query"], {})
+ did_pass = r.get("pass", False)
+ triggers = r.get("triggers", 0)
+ runs = r.get("runs", 0)
+
+ icon = "✓" if did_pass else "✗"
+ css_class = "pass" if did_pass else "fail"
+
+            html_parts.append(f'<td class="{css_class}">{icon}<br>{triggers}/{runs}</td>\n')
+
+ # Add result for each test query (with different background)
+ for qinfo in test_queries:
+ r = test_by_query.get(qinfo["query"], {})
+ did_pass = r.get("pass", False)
+ triggers = r.get("triggers", 0)
+ runs = r.get("runs", 0)
+
+ icon = "✓" if did_pass else "✗"
+ css_class = "pass" if did_pass else "fail"
+
+            html_parts.append(f'<td class="test-col {css_class}">{icon}<br>{triggers}/{runs}</td>\n')
+
+ html_parts.append("
\n")
+
+ html_parts.append("""
+
+
+""")
+
+ html_parts.append("""
+
+
+""")
+
+ return "".join(html_parts)
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Generate HTML report from run_loop output")
+ parser.add_argument("input", help="Path to JSON output from run_loop.py (or - for stdin)")
+ parser.add_argument("-o", "--output", default=None, help="Output HTML file (default: stdout)")
+ parser.add_argument("--skill-name", default="", help="Skill name to include in the report title")
+ args = parser.parse_args()
+
+ if args.input == "-":
+ data = json.load(sys.stdin)
+ else:
+ data = json.loads(Path(args.input).read_text())
+
+ html_output = generate_html(data, skill_name=args.skill_name)
+
+ if args.output:
+ Path(args.output).write_text(html_output)
+ print(f"Report written to {args.output}", file=sys.stderr)
+ else:
+ print(html_output)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/.claude/skills/skill-creator/scripts/improve_description.py b/.claude/skills/skill-creator/scripts/improve_description.py
new file mode 100755
index 000000000..06bcec761
--- /dev/null
+++ b/.claude/skills/skill-creator/scripts/improve_description.py
@@ -0,0 +1,247 @@
+#!/usr/bin/env python3
+"""Improve a skill description based on eval results.
+
+Takes eval results (from run_eval.py) and generates an improved description
+by calling `claude -p` as a subprocess (same auth pattern as run_eval.py —
+uses the session's Claude Code auth, no separate ANTHROPIC_API_KEY needed).
+"""
+
+import argparse
+import json
+import os
+import re
+import subprocess
+import sys
+from pathlib import Path
+
+from scripts.utils import parse_skill_md
+
+
+def _call_claude(prompt: str, model: str | None, timeout: int = 300) -> str:
+ """Run `claude -p` with the prompt on stdin and return the text response.
+
+ Prompt goes over stdin (not argv) because it embeds the full SKILL.md
+ body and can easily exceed comfortable argv length.
+ """
+ cmd = ["claude", "-p", "--output-format", "text"]
+ if model:
+ cmd.extend(["--model", model])
+
+ # Remove CLAUDECODE env var to allow nesting claude -p inside a
+ # Claude Code session. The guard is for interactive terminal conflicts;
+ # programmatic subprocess usage is safe. Same pattern as run_eval.py.
+ env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}
+
+ result = subprocess.run(
+ cmd,
+ input=prompt,
+ capture_output=True,
+ text=True,
+ env=env,
+ timeout=timeout,
+ )
+ if result.returncode != 0:
+ raise RuntimeError(
+ f"claude -p exited {result.returncode}\nstderr: {result.stderr}"
+ )
+ return result.stdout
+
+
+def improve_description(
+ skill_name: str,
+ skill_content: str,
+ current_description: str,
+ eval_results: dict,
+ history: list[dict],
+ model: str,
+ test_results: dict | None = None,
+ log_dir: Path | None = None,
+ iteration: int | None = None,
+) -> str:
+ """Call Claude to improve the description based on eval results."""
+ failed_triggers = [
+ r for r in eval_results["results"]
+ if r["should_trigger"] and not r["pass"]
+ ]
+ false_triggers = [
+ r for r in eval_results["results"]
+ if not r["should_trigger"] and not r["pass"]
+ ]
+
+ # Build scores summary
+ train_score = f"{eval_results['summary']['passed']}/{eval_results['summary']['total']}"
+ if test_results:
+ test_score = f"{test_results['summary']['passed']}/{test_results['summary']['total']}"
+ scores_summary = f"Train: {train_score}, Test: {test_score}"
+ else:
+ scores_summary = f"Train: {train_score}"
+
+ prompt = f"""You are optimizing a skill description for a Claude Code skill called "{skill_name}". A "skill" is sort of like a prompt, but with progressive disclosure -- there's a title and description that Claude sees when deciding whether to use the skill, and then if it does use the skill, it reads the .md file which has lots more details and potentially links to other resources in the skill folder like helper files and scripts and additional documentation or examples.
+
+The description appears in Claude's "available_skills" list. When a user sends a query, Claude decides whether to invoke the skill based solely on the title and on this description. Your goal is to write a description that triggers for relevant queries, and doesn't trigger for irrelevant ones.
+
+Here's the current description:
+
+"{current_description}"
+
+
+Current scores ({scores_summary}):
+
+"""
+ if failed_triggers:
+ prompt += "FAILED TO TRIGGER (should have triggered but didn't):\n"
+ for r in failed_triggers:
+ prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
+ prompt += "\n"
+
+ if false_triggers:
+ prompt += "FALSE TRIGGERS (triggered but shouldn't have):\n"
+ for r in false_triggers:
+ prompt += f' - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
+ prompt += "\n"
+
+ if history:
+ prompt += "PREVIOUS ATTEMPTS (do NOT repeat these — try something structurally different):\n\n"
+ for h in history:
+ train_s = f"{h.get('train_passed', h.get('passed', 0))}/{h.get('train_total', h.get('total', 0))}"
+ test_s = f"{h.get('test_passed', '?')}/{h.get('test_total', '?')}" if h.get('test_passed') is not None else None
+ score_str = f"train={train_s}" + (f", test={test_s}" if test_s else "")
+            prompt += f'<attempt score="{score_str}">\n'
+ prompt += f'Description: "{h["description"]}"\n'
+ if "results" in h:
+ prompt += "Train results:\n"
+ for r in h["results"]:
+ status = "PASS" if r["pass"] else "FAIL"
+ prompt += f' [{status}] "{r["query"][:80]}" (triggered {r["triggers"]}/{r["runs"]})\n'
+ if h.get("note"):
+ prompt += f'Note: {h["note"]}\n'
+ prompt += "\n\n"
+
+ prompt += f"""
+
+Skill content (for context on what the skill does):
+
+{skill_content}
+
+
+Based on the failures, write a new and improved description that is more likely to trigger correctly. When I say "based on the failures", it's a bit of a tricky line to walk because we don't want to overfit to the specific cases you're seeing. So what I DON'T want you to do is produce an ever-expanding list of specific queries that this skill should or shouldn't trigger for. Instead, try to generalize from the failures to broader categories of user intent and situations where this skill would be useful or not useful. The reason for this is twofold:
+
+1. Avoid overfitting
+2. The list might get loooong and it's injected into ALL queries and there might be a lot of skills, so we don't want to blow too much space on any given description.
+
+Concretely, your description should not be more than about 100-200 words, even if that comes at the cost of accuracy. There is a hard limit of 1024 characters — descriptions over that will be truncated, so stay comfortably under it.
+
+Here are some tips that we've found to work well in writing these descriptions:
+- The skill should be phrased in the imperative -- "Use this skill for" rather than "this skill does"
+- The skill description should focus on the user's intent, what they are trying to achieve, vs. the implementation details of how the skill works.
+- The description competes with other skills for Claude's attention — make it distinctive and immediately recognizable.
+- If you're getting lots of failures after repeated attempts, change things up. Try different sentence structures or wordings.
+
+I'd encourage you to be creative and mix up the style in different iterations since you'll have multiple opportunities to try different approaches and we'll just grab the highest-scoring one at the end.
+
+Please respond with only the new description text in <new_description> tags, nothing else."""
+
+ text = _call_claude(prompt, model)
+
+    match = re.search(r"<new_description>(.*?)</new_description>", text, re.DOTALL)
+ description = match.group(1).strip().strip('"') if match else text.strip().strip('"')
+
+ transcript: dict = {
+ "iteration": iteration,
+ "prompt": prompt,
+ "response": text,
+ "parsed_description": description,
+ "char_count": len(description),
+ "over_limit": len(description) > 1024,
+ }
+
+ # Safety net: the prompt already states the 1024-char hard limit, but if
+ # the model blew past it anyway, make one fresh single-turn call that
+ # quotes the too-long version and asks for a shorter rewrite. (The old
+ # SDK path did this as a true multi-turn; `claude -p` is one-shot, so we
+ # inline the prior output into the new prompt instead.)
+ if len(description) > 1024:
+ shorten_prompt = (
+ f"{prompt}\n\n"
+ f"---\n\n"
+ f"A previous attempt produced this description, which at "
+ f"{len(description)} characters is over the 1024-character hard limit:\n\n"
+ f'"{description}"\n\n'
+ f"Rewrite it to be under 1024 characters while keeping the most "
+ f"important trigger words and intent coverage. Respond with only "
+ f"the new description in tags."
+ )
+ shorten_text = _call_claude(shorten_prompt, model)
+        match = re.search(r"<new_description>(.*?)</new_description>", shorten_text, re.DOTALL)
+ shortened = match.group(1).strip().strip('"') if match else shorten_text.strip().strip('"')
+
+ transcript["rewrite_prompt"] = shorten_prompt
+ transcript["rewrite_response"] = shorten_text
+ transcript["rewrite_description"] = shortened
+ transcript["rewrite_char_count"] = len(shortened)
+ description = shortened
+
+ transcript["final_description"] = description
+
+ if log_dir:
+ log_dir.mkdir(parents=True, exist_ok=True)
+ log_file = log_dir / f"improve_iter_{iteration or 'unknown'}.json"
+ log_file.write_text(json.dumps(transcript, indent=2))
+
+ return description
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Improve a skill description based on eval results")
+ parser.add_argument("--eval-results", required=True, help="Path to eval results JSON (from run_eval.py)")
+ parser.add_argument("--skill-path", required=True, help="Path to skill directory")
+ parser.add_argument("--history", default=None, help="Path to history JSON (previous attempts)")
+ parser.add_argument("--model", required=True, help="Model for improvement")
+ parser.add_argument("--verbose", action="store_true", help="Print thinking to stderr")
+ args = parser.parse_args()
+
+ skill_path = Path(args.skill_path)
+ if not (skill_path / "SKILL.md").exists():
+ print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
+ sys.exit(1)
+
+ eval_results = json.loads(Path(args.eval_results).read_text())
+ history = []
+ if args.history:
+ history = json.loads(Path(args.history).read_text())
+
+ name, _, content = parse_skill_md(skill_path)
+ current_description = eval_results["description"]
+
+ if args.verbose:
+ print(f"Current: {current_description}", file=sys.stderr)
+ print(f"Score: {eval_results['summary']['passed']}/{eval_results['summary']['total']}", file=sys.stderr)
+
+ new_description = improve_description(
+ skill_name=name,
+ skill_content=content,
+ current_description=current_description,
+ eval_results=eval_results,
+ history=history,
+ model=args.model,
+ )
+
+ if args.verbose:
+ print(f"Improved: {new_description}", file=sys.stderr)
+
+ # Output as JSON with both the new description and updated history
+ output = {
+ "description": new_description,
+ "history": history + [{
+ "description": current_description,
+ "passed": eval_results["summary"]["passed"],
+ "failed": eval_results["summary"]["failed"],
+ "total": eval_results["summary"]["total"],
+ "results": eval_results["results"],
+ }],
+ }
+ print(json.dumps(output, indent=2))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/.claude/skills/skill-creator/scripts/package_skill.py b/.claude/skills/skill-creator/scripts/package_skill.py
new file mode 100755
index 000000000..f48eac444
--- /dev/null
+++ b/.claude/skills/skill-creator/scripts/package_skill.py
@@ -0,0 +1,136 @@
+#!/usr/bin/env python3
+"""
+Skill Packager - Creates a distributable .skill file of a skill folder
+
+Usage:
+    python scripts/package_skill.py <path/to/skill-folder> [output-directory]
+
+Example:
+    python scripts/package_skill.py skills/public/my-skill
+    python scripts/package_skill.py skills/public/my-skill ./dist
+"""
+
+import fnmatch
+import sys
+import zipfile
+from pathlib import Path
+from scripts.quick_validate import validate_skill
+
+# Patterns to exclude when packaging skills.
+EXCLUDE_DIRS = {"__pycache__", "node_modules"}
+EXCLUDE_GLOBS = {"*.pyc"}
+EXCLUDE_FILES = {".DS_Store"}
+# Directories excluded only at the skill root (not when nested deeper).
+ROOT_EXCLUDE_DIRS = {"evals"}
+
+
+def should_exclude(rel_path: Path) -> bool:
+ """Check if a path should be excluded from packaging."""
+ parts = rel_path.parts
+ if any(part in EXCLUDE_DIRS for part in parts):
+ return True
+ # rel_path is relative to skill_path.parent, so parts[0] is the skill
+ # folder name and parts[1] (if present) is the first subdir.
+ if len(parts) > 1 and parts[1] in ROOT_EXCLUDE_DIRS:
+ return True
+ name = rel_path.name
+ if name in EXCLUDE_FILES:
+ return True
+ return any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_GLOBS)
+
+
+def package_skill(skill_path, output_dir=None):
+ """
+ Package a skill folder into a .skill file.
+
+ Args:
+ skill_path: Path to the skill folder
+ output_dir: Optional output directory for the .skill file (defaults to current directory)
+
+ Returns:
+ Path to the created .skill file, or None if error
+ """
+ skill_path = Path(skill_path).resolve()
+
+ # Validate skill folder exists
+ if not skill_path.exists():
+ print(f"❌ Error: Skill folder not found: {skill_path}")
+ return None
+
+ if not skill_path.is_dir():
+ print(f"❌ Error: Path is not a directory: {skill_path}")
+ return None
+
+ # Validate SKILL.md exists
+ skill_md = skill_path / "SKILL.md"
+ if not skill_md.exists():
+ print(f"❌ Error: SKILL.md not found in {skill_path}")
+ return None
+
+ # Run validation before packaging
+ print("🔍 Validating skill...")
+ valid, message = validate_skill(skill_path)
+ if not valid:
+ print(f"❌ Validation failed: {message}")
+ print(" Please fix the validation errors before packaging.")
+ return None
+ print(f"✅ {message}\n")
+
+ # Determine output location
+ skill_name = skill_path.name
+ if output_dir:
+ output_path = Path(output_dir).resolve()
+ output_path.mkdir(parents=True, exist_ok=True)
+ else:
+ output_path = Path.cwd()
+
+ skill_filename = output_path / f"{skill_name}.skill"
+
+ # Create the .skill file (zip format)
+ try:
+ with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
+ # Walk through the skill directory, excluding build artifacts
+ for file_path in skill_path.rglob('*'):
+ if not file_path.is_file():
+ continue
+ arcname = file_path.relative_to(skill_path.parent)
+ if should_exclude(arcname):
+ print(f" Skipped: {arcname}")
+ continue
+ zipf.write(file_path, arcname)
+ print(f" Added: {arcname}")
+
+ print(f"\n✅ Successfully packaged skill to: {skill_filename}")
+ return skill_filename
+
+ except Exception as e:
+ print(f"❌ Error creating .skill file: {e}")
+ return None
+
+
+def main():
+ if len(sys.argv) < 2:
+ print("Usage: python utils/package_skill.py [output-directory]")
+ print("\nExample:")
+ print(" python utils/package_skill.py skills/public/my-skill")
+ print(" python utils/package_skill.py skills/public/my-skill ./dist")
+ sys.exit(1)
+
+ skill_path = sys.argv[1]
+ output_dir = sys.argv[2] if len(sys.argv) > 2 else None
+
+ print(f"📦 Packaging skill: {skill_path}")
+ if output_dir:
+ print(f" Output directory: {output_dir}")
+ print()
+
+ result = package_skill(skill_path, output_dir)
+
+ if result:
+ sys.exit(0)
+ else:
+ sys.exit(1)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/.claude/skills/skill-creator/scripts/quick_validate.py b/.claude/skills/skill-creator/scripts/quick_validate.py
new file mode 100755
index 000000000..ed8e1dddc
--- /dev/null
+++ b/.claude/skills/skill-creator/scripts/quick_validate.py
@@ -0,0 +1,103 @@
+#!/usr/bin/env python3
+"""
+Quick validation script for skills - minimal version
+"""
+
+import sys
+import os
+import re
+import yaml
+from pathlib import Path
+
+def validate_skill(skill_path):
+ """Basic validation of a skill"""
+ skill_path = Path(skill_path)
+
+ # Check SKILL.md exists
+ skill_md = skill_path / 'SKILL.md'
+ if not skill_md.exists():
+ return False, "SKILL.md not found"
+
+ # Read and validate frontmatter
+ content = skill_md.read_text()
+ if not content.startswith('---'):
+ return False, "No YAML frontmatter found"
+
+ # Extract frontmatter
+ match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
+ if not match:
+ return False, "Invalid frontmatter format"
+
+ frontmatter_text = match.group(1)
+
+ # Parse YAML frontmatter
+ try:
+ frontmatter = yaml.safe_load(frontmatter_text)
+ if not isinstance(frontmatter, dict):
+ return False, "Frontmatter must be a YAML dictionary"
+ except yaml.YAMLError as e:
+ return False, f"Invalid YAML in frontmatter: {e}"
+
+ # Define allowed properties
+ ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata', 'compatibility'}
+
+ # Check for unexpected properties (excluding nested keys under metadata)
+ unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
+ if unexpected_keys:
+ return False, (
+ f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
+ f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
+ )
+
+ # Check required fields
+ if 'name' not in frontmatter:
+ return False, "Missing 'name' in frontmatter"
+ if 'description' not in frontmatter:
+ return False, "Missing 'description' in frontmatter"
+
+ # Extract name for validation
+ name = frontmatter.get('name', '')
+ if not isinstance(name, str):
+ return False, f"Name must be a string, got {type(name).__name__}"
+ name = name.strip()
+ if name:
+ # Check naming convention (kebab-case: lowercase with hyphens)
+ if not re.match(r'^[a-z0-9-]+$', name):
+ return False, f"Name '{name}' should be kebab-case (lowercase letters, digits, and hyphens only)"
+ if name.startswith('-') or name.endswith('-') or '--' in name:
+ return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
+ # Check name length (max 64 characters per spec)
+ if len(name) > 64:
+ return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."
+
+ # Extract and validate description
+ description = frontmatter.get('description', '')
+ if not isinstance(description, str):
+ return False, f"Description must be a string, got {type(description).__name__}"
+ description = description.strip()
+ if description:
+ # Check for angle brackets
+ if '<' in description or '>' in description:
+ return False, "Description cannot contain angle brackets (< or >)"
+ # Check description length (max 1024 characters per spec)
+ if len(description) > 1024:
+ return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."
+
+ # Validate compatibility field if present (optional)
+ compatibility = frontmatter.get('compatibility', '')
+ if compatibility:
+ if not isinstance(compatibility, str):
+ return False, f"Compatibility must be a string, got {type(compatibility).__name__}"
+ if len(compatibility) > 500:
+ return False, f"Compatibility is too long ({len(compatibility)} characters). Maximum is 500 characters."
+
+ return True, "Skill is valid!"
+
+if __name__ == "__main__":
+ if len(sys.argv) != 2:
+ print("Usage: python quick_validate.py ")
+ sys.exit(1)
+
+ valid, message = validate_skill(sys.argv[1])
+ print(message)
+ sys.exit(0 if valid else 1)
\ No newline at end of file
diff --git a/.claude/skills/skill-creator/scripts/run_eval.py b/.claude/skills/skill-creator/scripts/run_eval.py
new file mode 100755
index 000000000..e58c70bea
--- /dev/null
+++ b/.claude/skills/skill-creator/scripts/run_eval.py
@@ -0,0 +1,310 @@
+#!/usr/bin/env python3
+"""Run trigger evaluation for a skill description.
+
+Tests whether a skill's description causes Claude to trigger (read the skill)
+for a set of queries. Outputs results as JSON.
+"""
+
+import argparse
+import json
+import os
+import select
+import subprocess
+import sys
+import time
+import uuid
+from concurrent.futures import ProcessPoolExecutor, as_completed
+from pathlib import Path
+
+from scripts.utils import parse_skill_md
+
+
+def find_project_root() -> Path:
+ """Find the project root by walking up from cwd looking for .claude/.
+
+ Mimics how Claude Code discovers its project root, so the command file
+ we create ends up where claude -p will look for it.
+ """
+ current = Path.cwd()
+ for parent in [current, *current.parents]:
+ if (parent / ".claude").is_dir():
+ return parent
+ return current
+
+
+def run_single_query(
+ query: str,
+ skill_name: str,
+ skill_description: str,
+ timeout: int,
+ project_root: str,
+ model: str | None = None,
+) -> bool:
+ """Run a single query and return whether the skill was triggered.
+
+ Creates a command file in .claude/commands/ so it appears in Claude's
+ available_skills list, then runs `claude -p` with the raw query.
+ Uses --include-partial-messages to detect triggering early from
+ stream events (content_block_start) rather than waiting for the
+ full assistant message, which only arrives after tool execution.
+ """
+ unique_id = uuid.uuid4().hex[:8]
+ clean_name = f"{skill_name}-skill-{unique_id}"
+ project_commands_dir = Path(project_root) / ".claude" / "commands"
+ command_file = project_commands_dir / f"{clean_name}.md"
+
+ try:
+ project_commands_dir.mkdir(parents=True, exist_ok=True)
+ # Use YAML block scalar to avoid breaking on quotes in description
+        indented_desc = "\n  ".join(skill_description.split("\n"))
+ command_content = (
+ f"---\n"
+ f"description: |\n"
+ f" {indented_desc}\n"
+ f"---\n\n"
+ f"# {skill_name}\n\n"
+ f"This skill handles: {skill_description}\n"
+ )
+ command_file.write_text(command_content)
+
+ cmd = [
+ "claude",
+ "-p", query,
+ "--output-format", "stream-json",
+ "--verbose",
+ "--include-partial-messages",
+ ]
+ if model:
+ cmd.extend(["--model", model])
+
+ # Remove CLAUDECODE env var to allow nesting claude -p inside a
+ # Claude Code session. The guard is for interactive terminal conflicts;
+ # programmatic subprocess usage is safe.
+ env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}
+
+ process = subprocess.Popen(
+ cmd,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.DEVNULL,
+ cwd=project_root,
+ env=env,
+ )
+
+ triggered = False
+ start_time = time.time()
+ buffer = ""
+ # Track state for stream event detection
+ pending_tool_name = None
+ accumulated_json = ""
+
+ try:
+ while time.time() - start_time < timeout:
+ if process.poll() is not None:
+ remaining = process.stdout.read()
+ if remaining:
+ buffer += remaining.decode("utf-8", errors="replace")
+ break
+
+ ready, _, _ = select.select([process.stdout], [], [], 1.0)
+ if not ready:
+ continue
+
+ chunk = os.read(process.stdout.fileno(), 8192)
+ if not chunk:
+ break
+ buffer += chunk.decode("utf-8", errors="replace")
+
+ while "\n" in buffer:
+ line, buffer = buffer.split("\n", 1)
+ line = line.strip()
+ if not line:
+ continue
+
+ try:
+ event = json.loads(line)
+ except json.JSONDecodeError:
+ continue
+
+ # Early detection via stream events
+ if event.get("type") == "stream_event":
+ se = event.get("event", {})
+ se_type = se.get("type", "")
+
+ if se_type == "content_block_start":
+ cb = se.get("content_block", {})
+ if cb.get("type") == "tool_use":
+ tool_name = cb.get("name", "")
+ if tool_name in ("Skill", "Read"):
+ pending_tool_name = tool_name
+ accumulated_json = ""
+ else:
+ return False
+
+ elif se_type == "content_block_delta" and pending_tool_name:
+ delta = se.get("delta", {})
+ if delta.get("type") == "input_json_delta":
+ accumulated_json += delta.get("partial_json", "")
+ if clean_name in accumulated_json:
+ return True
+
+ elif se_type in ("content_block_stop", "message_stop"):
+ if pending_tool_name:
+ return clean_name in accumulated_json
+ if se_type == "message_stop":
+ return False
+
+ # Fallback: full assistant message
+ elif event.get("type") == "assistant":
+ message = event.get("message", {})
+ for content_item in message.get("content", []):
+ if content_item.get("type") != "tool_use":
+ continue
+ tool_name = content_item.get("name", "")
+ tool_input = content_item.get("input", {})
+ if tool_name == "Skill" and clean_name in tool_input.get("skill", ""):
+ triggered = True
+ elif tool_name == "Read" and clean_name in tool_input.get("file_path", ""):
+ triggered = True
+ return triggered
+
+ elif event.get("type") == "result":
+ return triggered
+ finally:
+ # Clean up process on any exit path (return, exception, timeout)
+ if process.poll() is None:
+ process.kill()
+ process.wait()
+
+ return triggered
+ finally:
+ if command_file.exists():
+ command_file.unlink()
+
+
+def run_eval(
+ eval_set: list[dict],
+ skill_name: str,
+ description: str,
+ num_workers: int,
+ timeout: int,
+ project_root: Path,
+ runs_per_query: int = 1,
+ trigger_threshold: float = 0.5,
+ model: str | None = None,
+) -> dict:
+ """Run the full eval set and return results."""
+ results = []
+
+ with ProcessPoolExecutor(max_workers=num_workers) as executor:
+ future_to_info = {}
+ for item in eval_set:
+ for run_idx in range(runs_per_query):
+ future = executor.submit(
+ run_single_query,
+ item["query"],
+ skill_name,
+ description,
+ timeout,
+ str(project_root),
+ model,
+ )
+ future_to_info[future] = (item, run_idx)
+
+ query_triggers: dict[str, list[bool]] = {}
+ query_items: dict[str, dict] = {}
+ for future in as_completed(future_to_info):
+ item, _ = future_to_info[future]
+ query = item["query"]
+ query_items[query] = item
+ if query not in query_triggers:
+ query_triggers[query] = []
+ try:
+ query_triggers[query].append(future.result())
+ except Exception as e:
+ print(f"Warning: query failed: {e}", file=sys.stderr)
+ query_triggers[query].append(False)
+
+ for query, triggers in query_triggers.items():
+ item = query_items[query]
+ trigger_rate = sum(triggers) / len(triggers)
+ should_trigger = item["should_trigger"]
+ if should_trigger:
+ did_pass = trigger_rate >= trigger_threshold
+ else:
+ did_pass = trigger_rate < trigger_threshold
+ results.append({
+ "query": query,
+ "should_trigger": should_trigger,
+ "trigger_rate": trigger_rate,
+ "triggers": sum(triggers),
+ "runs": len(triggers),
+ "pass": did_pass,
+ })
+
+ passed = sum(1 for r in results if r["pass"])
+ total = len(results)
+
+ return {
+ "skill_name": skill_name,
+ "description": description,
+ "results": results,
+ "summary": {
+ "total": total,
+ "passed": passed,
+ "failed": total - passed,
+ },
+ }
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Run trigger evaluation for a skill description")
+ parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
+ parser.add_argument("--skill-path", required=True, help="Path to skill directory")
+ parser.add_argument("--description", default=None, help="Override description to test")
+ parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
+ parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
+ parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
+ parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
+ parser.add_argument("--model", default=None, help="Model to use for claude -p (default: user's configured model)")
+ parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
+ args = parser.parse_args()
+
+ eval_set = json.loads(Path(args.eval_set).read_text())
+ skill_path = Path(args.skill_path)
+
+ if not (skill_path / "SKILL.md").exists():
+ print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
+ sys.exit(1)
+
+ name, original_description, content = parse_skill_md(skill_path)
+ description = args.description or original_description
+ project_root = find_project_root()
+
+ if args.verbose:
+ print(f"Evaluating: {description}", file=sys.stderr)
+
+ output = run_eval(
+ eval_set=eval_set,
+ skill_name=name,
+ description=description,
+ num_workers=args.num_workers,
+ timeout=args.timeout,
+ project_root=project_root,
+ runs_per_query=args.runs_per_query,
+ trigger_threshold=args.trigger_threshold,
+ model=args.model,
+ )
+
+ if args.verbose:
+ summary = output["summary"]
+ print(f"Results: {summary['passed']}/{summary['total']} passed", file=sys.stderr)
+ for r in output["results"]:
+ status = "PASS" if r["pass"] else "FAIL"
+ rate_str = f"{r['triggers']}/{r['runs']}"
+ print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:70]}", file=sys.stderr)
+
+ print(json.dumps(output, indent=2))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/.claude/skills/skill-creator/scripts/run_loop.py b/.claude/skills/skill-creator/scripts/run_loop.py
new file mode 100755
index 000000000..30a263d67
--- /dev/null
+++ b/.claude/skills/skill-creator/scripts/run_loop.py
@@ -0,0 +1,328 @@
+#!/usr/bin/env python3
+"""Run the eval + improve loop until all pass or max iterations reached.
+
+Combines run_eval.py and improve_description.py in a loop, tracking history
+and returning the best description found. Supports train/test split to prevent
+overfitting.
+"""
+
+import argparse
+import json
+import random
+import sys
+import tempfile
+import time
+import webbrowser
+from pathlib import Path
+
+from scripts.generate_report import generate_html
+from scripts.improve_description import improve_description
+from scripts.run_eval import find_project_root, run_eval
+from scripts.utils import parse_skill_md
+
+
+def split_eval_set(eval_set: list[dict], holdout: float, seed: int = 42) -> tuple[list[dict], list[dict]]:
+ """Split eval set into train and test sets, stratified by should_trigger."""
+ random.seed(seed)
+
+ # Separate by should_trigger
+ trigger = [e for e in eval_set if e["should_trigger"]]
+ no_trigger = [e for e in eval_set if not e["should_trigger"]]
+
+ # Shuffle each group
+ random.shuffle(trigger)
+ random.shuffle(no_trigger)
+
+ # Calculate split points
+ n_trigger_test = max(1, int(len(trigger) * holdout))
+ n_no_trigger_test = max(1, int(len(no_trigger) * holdout))
+
+ # Split
+ test_set = trigger[:n_trigger_test] + no_trigger[:n_no_trigger_test]
+ train_set = trigger[n_trigger_test:] + no_trigger[n_no_trigger_test:]
+
+ return train_set, test_set
+
+
+def run_loop(
+ eval_set: list[dict],
+ skill_path: Path,
+ description_override: str | None,
+ num_workers: int,
+ timeout: int,
+ max_iterations: int,
+ runs_per_query: int,
+ trigger_threshold: float,
+ holdout: float,
+ model: str,
+ verbose: bool,
+ live_report_path: Path | None = None,
+ log_dir: Path | None = None,
+) -> dict:
+ """Run the eval + improvement loop."""
+ project_root = find_project_root()
+ name, original_description, content = parse_skill_md(skill_path)
+ current_description = description_override or original_description
+
+ # Split into train/test if holdout > 0
+ if holdout > 0:
+ train_set, test_set = split_eval_set(eval_set, holdout)
+ if verbose:
+ print(f"Split: {len(train_set)} train, {len(test_set)} test (holdout={holdout})", file=sys.stderr)
+ else:
+ train_set = eval_set
+ test_set = []
+
+ history = []
+ exit_reason = "unknown"
+
+ for iteration in range(1, max_iterations + 1):
+ if verbose:
+ print(f"\n{'='*60}", file=sys.stderr)
+ print(f"Iteration {iteration}/{max_iterations}", file=sys.stderr)
+ print(f"Description: {current_description}", file=sys.stderr)
+ print(f"{'='*60}", file=sys.stderr)
+
+ # Evaluate train + test together in one batch for parallelism
+ all_queries = train_set + test_set
+ t0 = time.time()
+ all_results = run_eval(
+ eval_set=all_queries,
+ skill_name=name,
+ description=current_description,
+ num_workers=num_workers,
+ timeout=timeout,
+ project_root=project_root,
+ runs_per_query=runs_per_query,
+ trigger_threshold=trigger_threshold,
+ model=model,
+ )
+ eval_elapsed = time.time() - t0
+
+ # Split results back into train/test by matching queries
+ train_queries_set = {q["query"] for q in train_set}
+ train_result_list = [r for r in all_results["results"] if r["query"] in train_queries_set]
+ test_result_list = [r for r in all_results["results"] if r["query"] not in train_queries_set]
+
+ train_passed = sum(1 for r in train_result_list if r["pass"])
+ train_total = len(train_result_list)
+ train_summary = {"passed": train_passed, "failed": train_total - train_passed, "total": train_total}
+ train_results = {"results": train_result_list, "summary": train_summary}
+
+ if test_set:
+ test_passed = sum(1 for r in test_result_list if r["pass"])
+ test_total = len(test_result_list)
+ test_summary = {"passed": test_passed, "failed": test_total - test_passed, "total": test_total}
+ test_results = {"results": test_result_list, "summary": test_summary}
+ else:
+ test_results = None
+ test_summary = None
+
+ history.append({
+ "iteration": iteration,
+ "description": current_description,
+ "train_passed": train_summary["passed"],
+ "train_failed": train_summary["failed"],
+ "train_total": train_summary["total"],
+ "train_results": train_results["results"],
+ "test_passed": test_summary["passed"] if test_summary else None,
+ "test_failed": test_summary["failed"] if test_summary else None,
+ "test_total": test_summary["total"] if test_summary else None,
+ "test_results": test_results["results"] if test_results else None,
+ # For backward compat with report generator
+ "passed": train_summary["passed"],
+ "failed": train_summary["failed"],
+ "total": train_summary["total"],
+ "results": train_results["results"],
+ })
+
+ # Write live report if path provided
+ if live_report_path:
+ partial_output = {
+ "original_description": original_description,
+ "best_description": current_description,
+ "best_score": "in progress",
+ "iterations_run": len(history),
+ "holdout": holdout,
+ "train_size": len(train_set),
+ "test_size": len(test_set),
+ "history": history,
+ }
+ live_report_path.write_text(generate_html(partial_output, auto_refresh=True, skill_name=name))
+
+ if verbose:
+ def print_eval_stats(label, results, elapsed):
+ pos = [r for r in results if r["should_trigger"]]
+ neg = [r for r in results if not r["should_trigger"]]
+ tp = sum(r["triggers"] for r in pos)
+ pos_runs = sum(r["runs"] for r in pos)
+ fn = pos_runs - tp
+ fp = sum(r["triggers"] for r in neg)
+ neg_runs = sum(r["runs"] for r in neg)
+ tn = neg_runs - fp
+ total = tp + tn + fp + fn
+ precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
+ recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0
+ accuracy = (tp + tn) / total if total > 0 else 0.0
+ print(f"{label}: {tp+tn}/{total} correct, precision={precision:.0%} recall={recall:.0%} accuracy={accuracy:.0%} ({elapsed:.1f}s)", file=sys.stderr)
+ for r in results:
+ status = "PASS" if r["pass"] else "FAIL"
+ rate_str = f"{r['triggers']}/{r['runs']}"
+ print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:60]}", file=sys.stderr)
+
+ print_eval_stats("Train", train_results["results"], eval_elapsed)
+ if test_summary:
+ print_eval_stats("Test ", test_results["results"], 0)
+
+ if train_summary["failed"] == 0:
+ exit_reason = f"all_passed (iteration {iteration})"
+ if verbose:
+ print(f"\nAll train queries passed on iteration {iteration}!", file=sys.stderr)
+ break
+
+ if iteration == max_iterations:
+ exit_reason = f"max_iterations ({max_iterations})"
+ if verbose:
+ print(f"\nMax iterations reached ({max_iterations}).", file=sys.stderr)
+ break
+
+ # Improve the description based on train results
+ if verbose:
+ print(f"\nImproving description...", file=sys.stderr)
+
+ t0 = time.time()
+ # Strip test scores from history so improvement model can't see them
+ blinded_history = [
+ {k: v for k, v in h.items() if not k.startswith("test_")}
+ for h in history
+ ]
+ new_description = improve_description(
+ skill_name=name,
+ skill_content=content,
+ current_description=current_description,
+ eval_results=train_results,
+ history=blinded_history,
+ model=model,
+ log_dir=log_dir,
+ iteration=iteration,
+ )
+ improve_elapsed = time.time() - t0
+
+ if verbose:
+ print(f"Proposed ({improve_elapsed:.1f}s): {new_description}", file=sys.stderr)
+
+ current_description = new_description
+
+ # Find the best iteration by TEST score (or train if no test set)
+ if test_set:
+ best = max(history, key=lambda h: h["test_passed"] or 0)
+ best_score = f"{best['test_passed']}/{best['test_total']}"
+ else:
+ best = max(history, key=lambda h: h["train_passed"])
+ best_score = f"{best['train_passed']}/{best['train_total']}"
+
+ if verbose:
+ print(f"\nExit reason: {exit_reason}", file=sys.stderr)
+ print(f"Best score: {best_score} (iteration {best['iteration']})", file=sys.stderr)
+
+ return {
+ "exit_reason": exit_reason,
+ "original_description": original_description,
+ "best_description": best["description"],
+ "best_score": best_score,
+ "best_train_score": f"{best['train_passed']}/{best['train_total']}",
+ "best_test_score": f"{best['test_passed']}/{best['test_total']}" if test_set else None,
+ "final_description": current_description,
+ "iterations_run": len(history),
+ "holdout": holdout,
+ "train_size": len(train_set),
+ "test_size": len(test_set),
+ "history": history,
+ }
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Run eval + improve loop")
+ parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file")
+ parser.add_argument("--skill-path", required=True, help="Path to skill directory")
+ parser.add_argument("--description", default=None, help="Override starting description")
+ parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers")
+ parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds")
+ parser.add_argument("--max-iterations", type=int, default=5, help="Max improvement iterations")
+ parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query")
+ parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold")
+ parser.add_argument("--holdout", type=float, default=0.4, help="Fraction of eval set to hold out for testing (0 to disable)")
+ parser.add_argument("--model", required=True, help="Model for improvement")
+ parser.add_argument("--verbose", action="store_true", help="Print progress to stderr")
+ parser.add_argument("--report", default="auto", help="Generate HTML report at this path (default: 'auto' for temp file, 'none' to disable)")
+ parser.add_argument("--results-dir", default=None, help="Save all outputs (results.json, report.html, log.txt) to a timestamped subdirectory here")
+ args = parser.parse_args()
+
+ eval_set = json.loads(Path(args.eval_set).read_text())
+ skill_path = Path(args.skill_path)
+
+ if not (skill_path / "SKILL.md").exists():
+ print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
+ sys.exit(1)
+
+ name, _, _ = parse_skill_md(skill_path)
+
+ # Set up live report path
+ if args.report != "none":
+ if args.report == "auto":
+ timestamp = time.strftime("%Y%m%d_%H%M%S")
+ live_report_path = Path(tempfile.gettempdir()) / f"skill_description_report_{skill_path.name}_{timestamp}.html"
+ else:
+ live_report_path = Path(args.report)
+ # Open the report immediately so the user can watch
+        live_report_path.write_text("Starting optimization loop...")  # placeholder until the first iteration writes the real report
+ webbrowser.open(str(live_report_path))
+ else:
+ live_report_path = None
+
+ # Determine output directory (create before run_loop so logs can be written)
+ if args.results_dir:
+ timestamp = time.strftime("%Y-%m-%d_%H%M%S")
+ results_dir = Path(args.results_dir) / timestamp
+ results_dir.mkdir(parents=True, exist_ok=True)
+ else:
+ results_dir = None
+
+ log_dir = results_dir / "logs" if results_dir else None
+
+ output = run_loop(
+ eval_set=eval_set,
+ skill_path=skill_path,
+ description_override=args.description,
+ num_workers=args.num_workers,
+ timeout=args.timeout,
+ max_iterations=args.max_iterations,
+ runs_per_query=args.runs_per_query,
+ trigger_threshold=args.trigger_threshold,
+ holdout=args.holdout,
+ model=args.model,
+ verbose=args.verbose,
+ live_report_path=live_report_path,
+ log_dir=log_dir,
+ )
+
+ # Save JSON output
+ json_output = json.dumps(output, indent=2)
+ print(json_output)
+ if results_dir:
+ (results_dir / "results.json").write_text(json_output)
+
+ # Write final HTML report (without auto-refresh)
+ if live_report_path:
+ live_report_path.write_text(generate_html(output, auto_refresh=False, skill_name=name))
+ print(f"\nReport: {live_report_path}", file=sys.stderr)
+
+ if results_dir and live_report_path:
+ (results_dir / "report.html").write_text(generate_html(output, auto_refresh=False, skill_name=name))
+
+ if results_dir:
+ print(f"Results saved to: {results_dir}", file=sys.stderr)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/.claude/skills/skill-creator/scripts/utils.py b/.claude/skills/skill-creator/scripts/utils.py
new file mode 100644
index 000000000..51b6a07dd
--- /dev/null
+++ b/.claude/skills/skill-creator/scripts/utils.py
@@ -0,0 +1,47 @@
+"""Shared utilities for skill-creator scripts."""
+
+from pathlib import Path
+
+
+
+def parse_skill_md(skill_path: Path) -> tuple[str, str, str]:
+ """Parse a SKILL.md file, returning (name, description, full_content)."""
+ content = (skill_path / "SKILL.md").read_text()
+ lines = content.split("\n")
+
+ if lines[0].strip() != "---":
+ raise ValueError("SKILL.md missing frontmatter (no opening ---)")
+
+ end_idx = None
+ for i, line in enumerate(lines[1:], start=1):
+ if line.strip() == "---":
+ end_idx = i
+ break
+
+ if end_idx is None:
+ raise ValueError("SKILL.md missing frontmatter (no closing ---)")
+
+ name = ""
+ description = ""
+ frontmatter_lines = lines[1:end_idx]
+ i = 0
+ while i < len(frontmatter_lines):
+ line = frontmatter_lines[i]
+ if line.startswith("name:"):
+ name = line[len("name:"):].strip().strip('"').strip("'")
+ elif line.startswith("description:"):
+ value = line[len("description:"):].strip()
+ # Handle YAML multiline indicators (>, |, >-, |-)
+ if value in (">", "|", ">-", "|-"):
+ continuation_lines: list[str] = []
+ i += 1
+ while i < len(frontmatter_lines) and (frontmatter_lines[i].startswith(" ") or frontmatter_lines[i].startswith("\t")):
+ continuation_lines.append(frontmatter_lines[i].strip())
+ i += 1
+ description = " ".join(continuation_lines)
+ continue
+ else:
+ description = value.strip('"').strip("'")
+ i += 1
+
+ return name, description, content
From d4caba09678f887b49b7dee3f121a99f21238e7e Mon Sep 17 00:00:00 2001
From: simianastronaut
Date: Sat, 7 Mar 2026 21:39:31 -0500
Subject: [PATCH 03/35] Removing extraneous required fields from Github PRs
---
.github/ISSUE_TEMPLATE/bug_report.yml | 20 +++++---------------
.github/ISSUE_TEMPLATE/feature_request.yml | 16 ++++++++--------
2 files changed, 13 insertions(+), 23 deletions(-)
diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
index 80811388d..9f10edfe0 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -11,15 +11,6 @@ body:
Please provide a minimal reproducible case so maintainers can triage quickly.
Do not include personal/sensitive data; redact and anonymize all logs/payloads.
- - type: input
- id: summary
- attributes:
- label: Summary
- description: One-line description of the problem.
- placeholder: zeroclaw daemon exits immediately when ...
- validations:
- required: true
-
- type: dropdown
id: component
attributes:
@@ -83,13 +74,13 @@ body:
id: impact
attributes:
label: Impact
- description: Who is affected, how often, and practical consequences.
+ description: Who is affected, how often, and practical consequences (optional but helps triage).
placeholder: |
Affected users: ...
Frequency: always/intermittent
Consequence: ...
validations:
- required: true
+ required: false
- type: textarea
id: logs
@@ -112,9 +103,10 @@ body:
id: rust
attributes:
label: Rust version
+ description: Required for runtime/build bugs; optional for docs/config issues.
placeholder: rustc 1.xx.x
validations:
- required: true
+ required: false
- type: input
id: os
@@ -142,7 +134,5 @@ body:
options:
- label: I reproduced this on the latest master branch or latest release.
required: true
- - label: I redacted secrets/tokens from logs.
- required: true
- - label: I removed personal identifiers and replaced identity-specific data with neutral placeholders.
+ - label: I redacted secrets, tokens, and personal data from all submitted content.
required: true
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml
index 25fa32b43..f1c07a15f 100644
--- a/.github/ISSUE_TEMPLATE/feature_request.yml
+++ b/.github/ISSUE_TEMPLATE/feature_request.yml
@@ -42,10 +42,10 @@ body:
id: non_goals
attributes:
label: Non-goals / out of scope
- description: Clarify what should not be included in the first iteration.
+ description: Clarify what should not be included in the first iteration (optional but helps scope discussion).
placeholder: No UI changes, no cross-provider dynamic adaptation in v1.
validations:
- required: true
+ required: false
- type: textarea
id: alternatives
@@ -60,31 +60,31 @@ body:
id: acceptance
attributes:
label: Acceptance criteria
- description: What outcomes would make this request complete?
+ description: What outcomes would make this request complete? (optional — can be defined during triage)
placeholder: |
- Config key is documented and validated
- Runtime path uses configured retry budget
- Regression tests cover fallback and invalid config
validations:
- required: true
+ required: false
- type: textarea
id: architecture
attributes:
label: Architecture impact
- description: Which subsystem(s) are affected?
+ description: Which subsystem(s) are affected? (optional — maintainers will assess during triage)
placeholder: providers/, channels/, memory/, runtime/, security/, docs/ ...
validations:
- required: true
+ required: false
- type: textarea
id: risk
attributes:
label: Risk and rollback
- description: Main risk + how to disable/revert quickly.
+ description: Main risk + how to disable/revert quickly (optional — can be defined during planning).
placeholder: Risk is ... rollback is ...
validations:
- required: true
+ required: false
- type: dropdown
id: breaking
From a7e295c966358073b7d0fe107f4683814da47680 Mon Sep 17 00:00:00 2001
From: simianastronaut
Date: Sat, 7 Mar 2026 21:48:58 -0500
Subject: [PATCH 04/35] Skill for making PRs in the proper format
---
.claude/skills/github-issue/SKILL.md | 133 +++++++++++++++++++++++++++
1 file changed, 133 insertions(+)
create mode 100644 .claude/skills/github-issue/SKILL.md
diff --git a/.claude/skills/github-issue/SKILL.md b/.claude/skills/github-issue/SKILL.md
new file mode 100644
index 000000000..2f793ca36
--- /dev/null
+++ b/.claude/skills/github-issue/SKILL.md
@@ -0,0 +1,133 @@
+# Skill: github-issue
+
+File a structured GitHub issue (bug report or feature request) for ZeroClaw interactively from Claude Code.
+
+## When to Use
+
+Trigger when the user wants to file a GitHub issue, report a bug, or request a feature for ZeroClaw. Keywords: "file issue", "report bug", "feature request", "open issue", "create issue", "github issue".
+
+## Instructions
+
+You are filing a GitHub issue against the ZeroClaw repository using structured issue forms. Follow this workflow exactly.
+
+### Step 1: Detect Issue Type and Read the Template
+
+Determine from the user's message whether this is a **bug report** or **feature request**.
+- If unclear, use AskUserQuestion to ask: "Is this a bug report or a feature request?"
+
+Then read the corresponding issue template to understand the required fields:
+
+- Bug report: `.github/ISSUE_TEMPLATE/bug_report.yml`
+- Feature request: `.github/ISSUE_TEMPLATE/feature_request.yml`
+
+Parse the YAML to extract:
+- The `title` prefix (e.g. `[Bug]: `, `[Feature]: `)
+- The `labels` array
+- Each field in the `body` array: its `type` (dropdown, textarea, input, checkboxes, markdown), `id`, `attributes.label`, `attributes.options` (for dropdowns), `attributes.description`, `attributes.placeholder`, and `validations.required`
+
+This is the source of truth for what fields exist, what they're called, what options are available, and which are required. Do not assume or hardcode any field names or options — always derive them from the template file.
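+
+If a quick programmatic check is helpful, a minimal parsing sketch (assumes PyYAML is available; the key layout follows GitHub's issue-forms schema described above):
+
+```python
+import yaml
+from pathlib import Path
+
+# Bug report template shown; the feature request template is parsed the same way.
+template = yaml.safe_load(Path(".github/ISSUE_TEMPLATE/bug_report.yml").read_text())
+
+title_prefix = template.get("title", "")   # e.g. "[Bug]: "
+labels = template.get("labels", [])
+
+for field in template.get("body", []):
+    if field.get("type") == "markdown":
+        continue  # informational header, not a form input
+    attrs = field.get("attributes", {})
+    required = field.get("validations", {}).get("required", False)
+    print(field.get("id"), field.get("type"), attrs.get("label"),
+          attrs.get("options", []), "required" if required else "optional")
+```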
+
+### Step 2: Auto-Gather Context
+
+Before asking the user anything, silently gather environment and repo context:
+
+```bash
+# Git context
+git log --oneline -5
+git status --short
+git diff --stat HEAD~1 2>/dev/null
+
+# For bug reports — environment detection
+uname -s -r -m # OS info
+sw_vers 2>/dev/null # macOS version
+rustc --version 2>/dev/null # Rust version
+cargo metadata --format-version=1 --no-deps 2>/dev/null | jq -r '.packages[] | select(.name=="zeroclaw") | .version' 2>/dev/null # ZeroClaw version
+git rev-parse --short HEAD # commit SHA fallback
+```
+
+Also read recently changed files to infer the affected component and architecture impact.
+
+### Step 3: Pre-Fill and Present the Form
+
+Using the parsed template fields and gathered context, draft values for ALL fields from the template:
+
+- **dropdown** fields: select the most likely option from `attributes.options` based on context. For dropdowns where you're uncertain, note your best guess and flag it for the user.
+- **textarea** fields: draft content based on the user's description, git context, and the field's `attributes.description`/`attributes.placeholder` for guidance on what's expected.
+- **input** fields: fill with auto-detected values (versions, OS) or draft from user context.
+- **checkboxes** fields: auto-check all items (the skill itself ensures compliance with the stated checks).
+- **markdown** fields: skip these — they're informational headers, not form inputs.
+- **optional fields** (where `validations.required` is false): fill if there's enough context, otherwise note "(optional — not enough context to fill)".
+
+Present the complete draft to the user in a clean readable format:
+
+```
+## Issue Draft: [Bug]: <title> / [Feature]: <title>
+**Labels**: <labels from template>
+
+### <Field Label>
+
+<drafted content>
+
+### <Field Label>
+
+...
+```
+
+Use AskUserQuestion to ask the user to review:
+- "Here's the pre-filled issue. Please review and let me know what to change, or say 'submit' to file it."
+
+If the user requests changes, update the draft and re-present. Iterate until the user approves.
+
+### Step 4: Scope Guard
+
+Before final submission, analyze the collected content for scope creep:
+- Does the bug report describe multiple independent defects?
+- Does the feature request bundle unrelated changes?
+
+If multi-concept issues are detected:
+1. Inform the user: "This issue appears to cover multiple distinct topics. Focused, single-concept issues are strongly preferred and more likely to be accepted."
+2. Break down the distinct groups found.
+3. Offer to file separate issues for each group, reusing shared context (environment, etc.).
+4. Let the user decide: proceed as-is or split.
+
+### Step 5: Construct Issue Body
+
+Build the issue body as markdown sections matching GitHub's form-field rendering format. GitHub renders form-submitted issues with `### ` sections, so use that exact structure.
+
+For each non-markdown field from the template, in order:
+
+```markdown
+### <Field Label>
+
+<field value>
+```
+
+For optional fields with no content, use `_No response_` as the value (this matches GitHub's native rendering for empty optional fields).
+
+For checkbox fields, render each option as:
+```markdown
+- [X] <checkbox label>
diff --git a/README.nb.md b/README.nb.md
new file mode 100644
index 000000000..323c536a3
--- /dev/null
+++ b/README.nb.md
@@ -0,0 +1,179 @@
+# ZeroClaw 🦀
+
+> Null overhead. Null kompromiss. 100% Rust. 100% Agnostisk.
+> ⚡️ Kjører på $10 maskinvare med <5MB RAM: Det er 99% mindre minne enn OpenClaw og 98% billigere enn en Mac mini!
+
+---
+
+## Hva er ZeroClaw?
+
+ZeroClaw er en lettvektig, foranderlig og utvidbar AI-assistent-infrastruktur bygget i Rust. Den kobler sammen ulike LLM-leverandører (Anthropic, OpenAI, Google, Ollama osv.) via et samlet grensesnitt og støtter flere kanaler (Telegram, Matrix, CLI osv.).
+
+### Hovedfunksjoner
+
+- **🦀 Skrevet i Rust**: Høy ytelse, minnesikkerhet og nullkostnads-abstraksjoner
+- **🔌 Leverandør-agnostisk**: Støtter OpenAI, Anthropic, Google Gemini, Ollama og andre
+- **📱 Multi-kanal**: Telegram, Matrix (med E2EE), CLI og andre
+- **🧠 Pluggbart minne**: SQLite og Markdown-backends
+- **🛠️ Utvidbare verktøy**: Legg til tilpassede verktøy enkelt
+- **🔒 Sikkerhet først**: Omvendt proxy, personvern-først design
+
+---
+
+## Rask Start
+
+### Krav
+
+- Rust 1.70+
+- En LLM-leverandør API-nøkkel (Anthropic, OpenAI osv.)
+
+### Installasjon
+
+```bash
+# Klon repository
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+
+# Bygg
+cargo build --release
+
+# Kjør
+cargo run --release
+```
+
+### Med Docker
+
+```bash
+docker run -d \
+ --name zeroclaw \
+ -e ANTHROPIC_API_KEY=your_key \
+ -v zeroclaw-data:/app/data \
+ zeroclaw/zeroclaw:latest
+```
+
+---
+
+## Konfigurasjon
+
+ZeroClaw bruker en YAML-konfigurasjonsfil. Som standard ser den etter `config.yaml`.
+
+```yaml
+# Standardleverandør
+provider: anthropic
+
+# Leverandørkonfigurasjon
+providers:
+ anthropic:
+ api_key: ${ANTHROPIC_API_KEY}
+ model: claude-3-5-sonnet-20241022
+ openai:
+ api_key: ${OPENAI_API_KEY}
+ model: gpt-4o
+
+# Minnekonfigurasjon
+memory:
+ backend: sqlite
+ path: data/memory.db
+
+# Kanalkonfigurasjon
+channels:
+ telegram:
+ token: ${TELEGRAM_BOT_TOKEN}
+```
+
+---
+
+## Dokumentasjon
+
+For detaljert dokumentasjon, se:
+
+- [Dokumentasjonshub](docs/README.md)
+- [Kommandoreferanse](docs/commands-reference.md)
+- [Leverandørreferanse](docs/providers-reference.md)
+- [Kanalreferanse](docs/channels-reference.md)
+- [Konfigurasjonsreferanse](docs/config-reference.md)
+
+---
+
+## Bidrag
+
+Bidrag er velkomne! Vennligst les [Bidragsguiden](CONTRIBUTING.md).
+
+---
+
+## Lisens
+
+Dette prosjektet er dobbelt-lisensiert:
+
+- MIT License
+- Apache License, versjon 2.0
+
+Se [LICENSE-APACHE](LICENSE-APACHE) og [LICENSE-MIT](LICENSE-MIT) for detaljer.
+
+---
+
+## Fellesskap
+
+- [Telegram](https://t.me/zeroclawlabs)
+- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
+
+---
+
+## Sponsorer
+
+Hvis ZeroClaw er nyttig for deg, vennligst vurder å kjøpe oss en kaffe:
+
+[](https://buymeacoffee.com/argenistherose)
diff --git a/README.nl.md b/README.nl.md
new file mode 100644
index 000000000..570bf67ad
--- /dev/null
+++ b/README.nl.md
@@ -0,0 +1,914 @@
+# ZeroClaw 🦀
+
+> Nul overhead. Nul compromis. 100% Rust. 100% Agnostisch.
+> ⚡️ Draait op $10 hardware met <5MB RAM: Dat is 99% minder geheugen dan OpenClaw en 98% goedkoper dan een Mac mini!
+
+Gebouwd door studenten en leden van de Harvard, MIT en Sundai.Club gemeenschappen.
+
+ZeroClaw is het runtime besturingssysteem voor agent workflows — een infrastructuur die modellen, tools, geheugen en uitvoering abstraheert om agenten één keer te bouwen en overal uit te voeren.
+
+Trait-gedreven architectuur · veilige runtime standaard · verwisselbare provider/kanaal/tool · alles is plugbaar
+
+### 📢 Aankondigingen
+
+Gebruik deze tabel voor belangrijke aankondigingen (compatibiliteitswijzigingen, beveiligingsberichten, onderhoudsvensters en versieblokkades).
+
+| Datum (UTC) | Niveau | Aankondiging | Actie |
+| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 2026-02-19 | _Kritiek_ | **We zijn niet gelieerd** met `openagen/zeroclaw` of `zeroclaw.org`. Het domein `zeroclaw.org` wijst momenteel naar de fork `openagen/zeroclaw`, en dit domein/repository imiteert onze officiële website/project. | Vertrouw geen informatie, binaire bestanden, fondsenwerving of aankondigingen van deze bronnen. Gebruik alleen [deze repository](https://github.com/zeroclaw-labs/zeroclaw) en onze geverifieerde sociale media accounts. |
+| 2026-02-21 | _Belangrijk_ | Onze officiële website is nu online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bedankt voor je geduld tijdens het wachten. We detecteren nog steeds imitatiepogingen: neem niet deel aan enige investering/fondsenwerving activiteit in naam van ZeroClaw als deze niet via onze officiële kanalen wordt gepubliceerd. | Gebruik [deze repository](https://github.com/zeroclaw-labs/zeroclaw) als de enige bron van waarheid. Volg [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (groep)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), en [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) voor officiële updates. |
+| 2026-02-19 | _Belangrijk_ | Anthropic heeft de gebruiksvoorwaarden voor authenticatie en inloggegevens bijgewerkt op 2026-02-19. OAuth authenticatie (Free, Pro, Max) is exclusief voor Claude Code en Claude.ai; het gebruik van Claude Free/Pro/Max OAuth tokens in enig ander product, tool of service (inclusief Agent SDK) is niet toegestaan en kan in strijd zijn met de Consumenten Gebruiksvoorwaarden. | Vermijd tijdelijk Claude Code OAuth integraties om potentiële verliezen te voorkomen. Originele clausule: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
+
+### ✨ Functies
+
+- 🏎️ **Lichtgewicht Runtime Standaard:** Veelvoorkomende CLI workflows en statuscommando's draaien binnen een geheugenruimte van enkele megabytes in productie builds.
+- 💰 **Kosteneffectieve Implementatie:** Ontworpen voor goedkope boards en kleine cloud instanties zonder zware runtime afhankelijkheden.
+- ⚡ **Snelle Koude Starts:** De single-binary Rust runtime houdt commando en daemon starts bijna direct voor dagelijkse operaties.
+- 🌍 **Draagbare Architectuur:** Een single-binary workflow op ARM, x86 en RISC-V met verwisselbare provider/kanaal/tool.
+
+### Waarom teams kiezen voor ZeroClaw
+
+- **Lichtgewicht standaard:** kleine Rust binary, snelle start, laag geheugengebruik.
+- **Veilig door design:** pairing, strikte sandboxing, expliciete allowlists, workspace scope.
+- **Volledig verwisselbaar:** kernsystemen zijn traits (providers, kanalen, tools, geheugen, tunnels).
+- **Geen vendor lock-in:** OpenAI-compatibele provider ondersteuning + plugbare custom endpoints.
+
+## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproduceerbaar)
+
+Snelle benchmark op lokale machine (macOS arm64, feb. 2026) genormaliseerd voor 0.8 GHz edge hardware.
+
+| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
+| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
+| **Taal** | TypeScript | Python | Go | **Rust** |
+| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
+| **Start (0.8 GHz core)** | > 500s | > 30s | < 1s | **< 10ms** |
+| **Binary Grootte** | ~28 MB (dist) | N/A (Scripts) | ~8 MB | **3.4 MB** |
+| **Kosten** | Mac Mini $599 | Linux SBC ~$50 | Linux board $10 | **Elke hardware $10** |
+
+> Opmerkingen: ZeroClaw resultaten worden gemeten op productie builds met `/usr/bin/time -l`. OpenClaw vereist de Node.js runtime (typisch ~390 MB extra geheugen overhead), terwijl NanoBot de Python runtime vereist. PicoClaw en ZeroClaw zijn statische binaries. De bovenstaande RAM cijfers zijn runtime geheugen; build-time compilatievereisten zijn hoger.
+
+### Reproduceerbare Lokale Meting
+
+Benchmark beweringen kunnen afwijken naarmate code en toolchains evolueren, dus meet altijd je huidige build lokaal:
+
+```bash
+cargo build --release
+ls -lh target/release/zeroclaw
+
+/usr/bin/time -l target/release/zeroclaw --help
+/usr/bin/time -l target/release/zeroclaw status
+```
+
+Voorbeeld monster (macOS arm64, gemeten op 18 februari 2026):
+
+- Release binary grootte: `8.8M`
+- `zeroclaw --help`: werkelijke tijd ongeveer `0.02s`, piek geheugengebruik ~`3.9 MB`
+- `zeroclaw status`: werkelijke tijd ongeveer `0.01s`, piek geheugengebruik ~`4.1 MB`
+
+## Vereisten
+
+
+
+### Windows — Vereist
+
+1. **Visual Studio Build Tools** (levert MSVC linker en Windows SDK):
+
+ ```powershell
+ winget install Microsoft.VisualStudio.2022.BuildTools
+ ```
+
+ Selecteer tijdens de installatie (of via Visual Studio Installer) de **"Desktop development with C++"** workload.
+
+2. **Rust Toolchain:**
+
+ ```powershell
+ winget install Rustlang.Rustup
+ ```
+
+ Na installatie, open een nieuwe terminal en voer `rustup default stable` uit om ervoor te zorgen dat de stabiele toolchain actief is.
+
+3. **Verifieer** dat beide werken:
+ ```powershell
+ rustc --version
+ cargo --version
+ ```
+
+### Windows — Optioneel
+
+- **Docker Desktop** — alleen vereist als je de [Docker sandboxed runtime](#huidige-runtime-ondersteuning) gebruikt (`runtime.kind = "docker"`). Installeer via `winget install Docker.DockerDesktop`.
+
+
+
+
+
+### Linux / macOS — Vereist
+
+1. **Essentiële build tools:**
+ - **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
+ - **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
+ - **macOS:** Installeer Xcode Command Line Tools: `xcode-select --install`
+
+2. **Rust Toolchain:**
+
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+
+ Zie [rustup.rs](https://rustup.rs) voor details.
+
+3. **Verifieer:**
+ ```bash
+ rustc --version
+ cargo --version
+ ```
+
+### Linux / macOS — Optioneel
+
+- **Docker** — alleen vereist als je de [Docker sandboxed runtime](#huidige-runtime-ondersteuning) gebruikt (`runtime.kind = "docker"`).
+ - **Linux (Debian/Ubuntu):** zie [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
+ - **Linux (Fedora/RHEL):** zie [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
+ - **macOS:** installeer Docker Desktop via [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)
+
+
+
+## Snelle Start
+
+### Optie 1: Geautomatiseerde setup (aanbevolen)
+
+Het `bootstrap.sh` script installeert Rust, kloont ZeroClaw, compileert het, en stelt je initiële ontwikkelomgeving in:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/bootstrap.sh | bash
+```
+
+Dit zal:
+
+1. Rust installeren (indien afwezig)
+2. De ZeroClaw repository klonen
+3. ZeroClaw compileren in release modus
+4. `zeroclaw` installeren in `~/.cargo/bin/`
+5. De standaard workspace structuur maken in `~/.zeroclaw/workspace/`
+6. Een initiële configuratie `~/.zeroclaw/workspace/config.toml` genereren
+
+Na de bootstrap, herlaad je shell of voer `source ~/.cargo/env` uit om het `zeroclaw` commando globaal te gebruiken.
+
+### Optie 2: Handmatige installatie
+
+
+
+```bash
+# 1. Kloon de repository
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+
+# 2. Compileer in release
+cargo build --release --locked
+
+# 3. Installeer de binary
+cargo install --path . --locked
+
+# 4. Initialiseer de workspace
+zeroclaw init
+
+# 5. Verifieer de installatie
+zeroclaw --version
+zeroclaw status
+```
+
+
+
+### Na Installatie
+
+Eenmaal geïnstalleerd (via bootstrap of handmatig), zou je moeten zien:
+
+```
+~/.zeroclaw/workspace/
+├── config.toml # Hoofdconfiguratie
+├── .pairing # Pairing geheimen (gegenereerd bij eerste lancering)
+├── logs/ # Daemon/agent logs
+├── skills/ # Aangepaste vaardigheden
+└── memory/ # Gesprekscontext opslag
+```
+
+**Volgende stappen:**
+
+1. Configureer je AI providers in `~/.zeroclaw/workspace/config.toml`
+2. Bekijk de [configuratie referentie](docs/config-reference.md) voor geavanceerde opties
+3. Start de agent: `zeroclaw agent start`
+4. Test via je voorkeurskanaal (zie [kanalen referentie](docs/channels-reference.md))
+
+## Configuratie
+
+Bewerk `~/.zeroclaw/workspace/config.toml` om providers, kanalen en systeemgedrag te configureren.
+
+### Snelle Configuratie Referentie
+
+```toml
+[providers.anthropic]
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+api_key = "sk-..."
+model = "gpt-4o"
+
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@bot:matrix.org"
+password = "..."
+
+[memory]
+kind = "markdown" # of "sqlite" of "none"
+
+[runtime]
+kind = "native" # of "docker" (vereist Docker)
+```
+
+**Volledige referentie documenten:**
+
+- [Configuratie Referentie](docs/config-reference.md) — alle instellingen, validaties, standaardwaarden
+- [Providers Referentie](docs/providers-reference.md) — AI provider-specifieke configuraties
+- [Kanalen Referentie](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord en meer
+- [Operations](docs/operations-runbook.md) — productie monitoring, geheim rotatie, schaling
+
+### Huidige Runtime Ondersteuning
+
+ZeroClaw ondersteunt twee code uitvoeringsbackends:
+
+- **`native`** (standaard) — directe procesuitvoering, snelste pad, ideaal voor vertrouwde omgevingen
+- **`docker`** — volledige container isolatie, versterkt beveiligingsbeleid, vereist Docker
+
+Gebruik `runtime.kind = "docker"` als je strikte sandboxing of netwerkisolatie nodig hebt. Zie [configuratie referentie](docs/config-reference.md#runtime) voor volledige details.
+
+## Commando's
+
+```bash
+# Workspace beheer
+zeroclaw init # Initialiseert een nieuwe workspace
+zeroclaw status # Toont daemon/agent status
+zeroclaw config validate # Verifieert config.toml syntax en waarden
+
+# Daemon beheer
+zeroclaw daemon start # Start de daemon in de achtergrond
+zeroclaw daemon stop # Stopt de draaiende daemon
+zeroclaw daemon restart # Herstart de daemon (config herladen)
+zeroclaw daemon logs # Toont daemon logs
+
+# Agent beheer
+zeroclaw agent start # Start de agent (vereist draaiende daemon)
+zeroclaw agent stop # Stopt de agent
+zeroclaw agent restart # Herstart de agent (config herladen)
+
+# Pairing operaties
+zeroclaw pairing init # Genereert een nieuw pairing geheim
+zeroclaw pairing rotate # Roteert het bestaande pairing geheim
+
+# Tunneling (voor publieke blootstelling)
+zeroclaw tunnel start # Start een tunnel naar de lokale daemon
+zeroclaw tunnel stop # Stopt de actieve tunnel
+
+# Diagnostiek
+zeroclaw doctor # Voert systeem gezondheidscontroles uit
+zeroclaw version # Toont versie en build informatie
+```
+
+Zie [Commando's Referentie](docs/commands-reference.md) voor volledige opties en voorbeelden.
+
+## Architectuur
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Kanalen (trait) │
+│ Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Agent Orchestrator │
+│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
+│ │ Bericht │ │ Context │ │ Tool │ │
+│ │ Routing │ │ Geheugen │ │ Uitvoering │ │
+│ └──────────────┘ └──────────────┘ └──────────────┘ │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ┌───────────────┼───────────────┐
+ ▼ ▼ ▼
+┌──────────────┐ ┌──────────────┐ ┌──────────────┐
+│ Providers │ │ Geheugen │ │ Tools │
+│ (trait) │ │ (trait) │ │ (trait) │
+├──────────────┤ ├──────────────┤ ├──────────────┤
+│ Anthropic │ │ Markdown │ │ Filesystem │
+│ OpenAI │ │ SQLite │ │ Bash │
+│ Gemini │ │ None │ │ Web Fetch │
+│ Ollama │ │ Custom │ │ Custom │
+│ Custom │ └──────────────┘ └──────────────┘
+└──────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Runtime (trait) │
+│ Native │ Docker │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+**Belangrijkste principes:**
+
+- Alles is een **trait** — providers, kanalen, tools, geheugen, tunnels
+- Kanalen roepen de orchestrator aan; de orchestrator roept providers + tools aan
+- Het geheugensysteem beheert gesprekscontext (markdown, SQLite, of geen)
+- De runtime abstraheert code-uitvoering (native of Docker)
+- Geen provider lock-in — wissel Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama zonder codewijzigingen
+
+Zie [architectuur documentatie](docs/architecture.svg) voor gedetailleerde diagrammen en implementatiedetails.
+
+## Voorbeelden
+
+### Telegram Bot
+
+```toml
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+allowed_users = [987654321] # Je Telegram user ID
+```
+
+Start de daemon + agent, stuur dan een bericht naar je bot op Telegram:
+
+```
+/start
+Hallo! Zou je me kunnen helpen met het schrijven van een Python script?
+```
+
+De bot reageert met AI-gegenereerde code, voert tools uit indien gevraagd, en behoudt gesprekscontext.
+
+### Matrix (end-to-end encryptie)
+
+```toml
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@zeroclaw:matrix.org"
+password = "..."
+device_name = "zeroclaw-prod"
+e2ee_enabled = true
+```
+
+Nodig `@zeroclaw:matrix.org` uit in een versleutelde kamer, en de bot zal reageren met volledige encryptie. Zie [Matrix E2EE Gids](docs/matrix-e2ee-guide.md) voor apparaatverificatie setup.
+
+### Multi-Provider
+
+```toml
+[providers.anthropic]
+enabled = true
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+enabled = true
+api_key = "sk-..."
+model = "gpt-4o"
+
+[orchestrator]
+default_provider = "anthropic"
+fallback_providers = ["openai"] # Failover bij provider fout
+```
+
+Als Anthropic faalt of rate-limit heeft, schakelt de orchestrator automatisch over naar OpenAI.
+
+### Aangepast Geheugen
+
+```toml
+[memory]
+kind = "sqlite"
+path = "~/.zeroclaw/workspace/memory/conversations.db"
+retention_days = 90 # Automatische opruiming na 90 dagen
+```
+
+Of gebruik Markdown voor mens-leesbare opslag:
+
+```toml
+[memory]
+kind = "markdown"
+path = "~/.zeroclaw/workspace/memory/"
+```
+
+Zie [Configuratie Referentie](docs/config-reference.md#memory) voor alle geheugenopties.
+
+## Provider Ondersteuning
+
+| Provider | Status | API Sleutel | Voorbeeld Modellen |
+| ----------------- | ----------- | ------------------- | ---------------------------------------------------- |
+| **Anthropic** | ✅ Stabiel | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
+| **OpenAI** | ✅ Stabiel | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
+| **Google Gemini** | ✅ Stabiel | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
+| **Ollama** | ✅ Stabiel | N/A (lokaal) | `llama3.3`, `qwen2.5`, `phi4` |
+| **Cerebras** | ✅ Stabiel | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
+| **Groq** | ✅ Stabiel | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
+| **Mistral** | 🚧 Gepland | `MISTRAL_API_KEY` | TBD |
+| **Cohere** | 🚧 Gepland | `COHERE_API_KEY` | TBD |
+
+### Aangepaste Endpoints
+
+ZeroClaw ondersteunt OpenAI-compatibele endpoints:
+
+```toml
+[providers.custom]
+enabled = true
+api_key = "..."
+base_url = "https://api.your-llm-provider.com/v1"
+model = "your-model-name"
+```
+
+Voorbeeld: gebruik [LiteLLM](https://github.com/BerriAI/litellm) als proxy om toegang te krijgen tot elke LLM via de OpenAI interface.
+
+Zie [Providers Referentie](docs/providers-reference.md) voor volledige configuratiedetails.
+
+## Kanaal Ondersteuning
+
+| Kanaal | Status | Authenticatie | Opmerkingen |
+| ------------ | ----------- | ------------------------ | --------------------------------------------------------- |
+| **Telegram** | ✅ Stabiel | Bot Token | Volledige ondersteuning inclusief bestanden, afbeeldingen, inline knoppen |
+| **Matrix** | ✅ Stabiel | Wachtwoord of Token | E2EE ondersteuning met apparaatverificatie |
+| **Slack** | 🚧 Gepland | OAuth of Bot Token | Vereist workspace toegang |
+| **Discord** | 🚧 Gepland | Bot Token | Vereist guild permissies |
+| **WhatsApp** | 🚧 Gepland | Twilio of officiële API | Vereist business account |
+| **CLI** | ✅ Stabiel | Geen | Directe conversationele interface |
+| **Web** | 🚧 Gepland | API Sleutel of OAuth | Browser-gebaseerde chat interface |
+
+Zie [Kanalen Referentie](docs/channels-reference.md) voor volledige configuratie-instructies.
+
+## Tool Ondersteuning
+
+ZeroClaw biedt ingebouwde tools voor code-uitvoering, bestandssysteem toegang en web retrieval:
+
+| Tool | Beschrijving | Vereiste Runtime |
+| -------------------- | --------------------------- | ----------------------------- |
+| **bash** | Voert shell commando's uit | Native of Docker |
+| **python** | Voert Python scripts uit | Python 3.8+ (native) of Docker |
+| **javascript** | Voert Node.js code uit | Node.js 18+ (native) of Docker |
+| **filesystem_read** | Leest bestanden | Native of Docker |
+| **filesystem_write** | Schrijft bestanden | Native of Docker |
+| **web_fetch** | Haalt web inhoud op | Native of Docker |
+
+### Uitvoeringsbeveiliging
+
+- **Native Runtime** — draait als gebruikersproces van de daemon, volledige bestandssysteem toegang
+- **Docker Runtime** — volledige container isolatie, gescheiden bestandssystemen en netwerken
+
+Configureer het uitvoeringsbeleid in `config.toml`:
+
+```toml
+[runtime]
+kind = "docker"
+allowed_tools = ["bash", "python", "filesystem_read"] # Expliciete allowlist
+```
+
+Zie [Configuratie Referentie](docs/config-reference.md#runtime) voor volledige beveiligingsopties.
+
+## Implementatie
+
+### Lokale Implementatie (Ontwikkeling)
+
+```bash
+zeroclaw daemon start
+zeroclaw agent start
+```
+
+### Server Implementatie (Productie)
+
+Gebruik systemd om daemon en agent als services te beheren:
+
+```bash
+# Installeer de binary
+cargo install --path . --locked
+
+# Configureer de workspace
+zeroclaw init
+
+# Maak systemd service bestanden
+sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
+sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
+
+# Schakel in en start de services
+sudo systemctl enable zeroclaw-daemon zeroclaw-agent
+sudo systemctl start zeroclaw-daemon zeroclaw-agent
+
+# Verifieer de status
+sudo systemctl status zeroclaw-daemon
+sudo systemctl status zeroclaw-agent
+```
+
+Zie [Netwerk Implementatie Gids](docs/network-deployment.md) voor volledige productie-implementatie instructies.
+
+### Docker
+
+```bash
+# Bouw de image
+docker build -t zeroclaw:latest .
+
+# Draai de container
+docker run -d \
+ --name zeroclaw \
+ -v ~/.zeroclaw/workspace:/workspace \
+ -e ANTHROPIC_API_KEY=sk-ant-... \
+ zeroclaw:latest
+```
+
+Zie [`Dockerfile`](Dockerfile) voor bouw-details en configuratie-opties.
+
+### Edge Hardware
+
+ZeroClaw is ontworpen om te draaien op laagvermogen hardware:
+
+- **Raspberry Pi Zero 2 W** — ~512 MB RAM, enkele ARMv8 core, < $5 hardware kosten
+- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideaal voor gelijktijdige workloads
+- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, ultra-lage kosten
+- **x86 SBCs (Intel N100)** — 4-8 GB RAM, snelle builds, native Docker ondersteuning
+
+Zie [Hardware Gids](docs/hardware/README.md) voor apparaat-specifieke setup instructies.
+
+## Tunneling (Publieke Blootstelling)
+
+Stel je lokale ZeroClaw daemon bloot aan het publieke netwerk via beveiligde tunnels:
+
+```bash
+zeroclaw tunnel start --provider cloudflare
+```
+
+Ondersteunde tunnel providers:
+
+- **Cloudflare Tunnel** — gratis HTTPS, geen poort blootstelling, multi-domein ondersteuning
+- **Ngrok** — snelle setup, aangepaste domeinen (betaald plan)
+- **Tailscale** — privé mesh netwerk, geen publieke poort
+
+Zie [Configuratie Referentie](docs/config-reference.md#tunnel) voor volledige configuratie-opties.
+
+## Beveiliging
+
+ZeroClaw implementeert meerdere beveiligingslagen:
+
+### Pairing
+
+De daemon genereert een pairing geheim bij de eerste lancering opgeslagen in `~/.zeroclaw/workspace/.pairing`. Clients (agent, CLI) moeten dit geheim presenteren om verbinding te maken.
+
+```bash
+zeroclaw pairing rotate # Genereert een nieuw geheim en invalideert het oude
+```
+
+### Sandboxing
+
+- **Docker Runtime** — volledige container isolatie met gescheiden bestandssystemen en netwerken
+- **Native Runtime** — draait als gebruikersproces, standaard scoped naar workspace
+
+### Allowlists
+
+Kanalen kunnen toegang beperken per user ID:
+
+```toml
+[channels.telegram]
+enabled = true
+allowed_users = [123456789, 987654321] # Expliciete allowlist
+```
+
+### Encryptie
+
+- **Matrix E2EE** — volledige end-to-end encryptie met apparaatverificatie
+- **TLS Transport** — alle API en tunnel verkeer gebruikt HTTPS/TLS
+
+Zie [Beveiligingsdocumentatie](docs/security/README.md) voor volledig beleid en praktijken.
+
+## Observeerbaarheid
+
+ZeroClaw logt naar `~/.zeroclaw/workspace/logs/` standaard. Logs worden per component opgeslagen:
+
+```
+~/.zeroclaw/workspace/logs/
+├── daemon.log # Daemon logs (startup, API verzoeken, fouten)
+├── agent.log # Agent logs (bericht routing, tool uitvoering)
+├── telegram.log # Kanaal-specifieke logs (indien ingeschakeld)
+└── matrix.log # Kanaal-specifieke logs (indien ingeschakeld)
+```
+
+### Logging Configuratie
+
+```toml
+[logging]
+level = "info" # debug, info, warn, error
+path = "~/.zeroclaw/workspace/logs/"
+rotation = "daily" # daily, hourly, size
+max_size_mb = 100 # Voor grootte-gebaseerde rotatie
+retention_days = 30 # Automatische opruiming na N dagen
+```
+
+Zie [Configuratie Referentie](docs/config-reference.md#logging) voor alle logging-opties.
+
+### Metrieken (Gepland)
+
+Prometheus metrieken ondersteuning voor productie monitoring komt binnenkort. Tracking in [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
+
+## Vaardigheden
+
+ZeroClaw ondersteunt aangepaste vaardigheden — herbruikbare modules die systeemmogelijkheden uitbreiden.
+
+### Vaardigheidsdefinitie
+
+Vaardigheden worden opgeslagen in `~/.zeroclaw/workspace/skills/<skill-naam>/` met deze structuur:
+
+```
+skills/
+└── my-skill/
+ ├── skill.toml # Vaardigheidsmetadata (naam, beschrijving, afhankelijkheden)
+ ├── prompt.md # Systeem prompt voor de AI
+ └── tools/ # Optionele aangepaste tools
+ └── my_tool.py
+```
+
+### Vaardigheidsvoorbeeld
+
+```toml
+# skills/web-research/skill.toml
+[skill]
+name = "web-research"
+description = "Zoekt op het web en vat resultaten samen"
+version = "1.0.0"
+
+[dependencies]
+tools = ["web_fetch", "bash"]
+```
+
+```markdown
+<!-- skills/web-research/prompt.md -->
+
+Je bent een onderzoeksassistent. Wanneer gevraagd wordt om iets te onderzoeken:
+
+1. Gebruik web_fetch om inhoud op te halen
+2. Vat resultaten samen in een gemakkelijk leesbaar formaat
+3. Citeer bronnen met URL's
+```
+
+### Vaardigheidsgebruik
+
+Vaardigheden worden automatisch geladen bij agent startup. Referentie ze bij naam in gesprekken:
+
+```
+Gebruiker: Gebruik de web-research vaardigheid om het laatste AI nieuws te vinden
+Bot: [laadt web-research vaardigheid, voert web_fetch uit, vat resultaten samen]
+```
+
+Zie [Vaardigheden](#vaardigheden) sectie voor volledige vaardigheidscreatie-instructies.
+
+## Open Skills
+
+ZeroClaw ondersteunt [Open Skills](https://github.com/openagents-com/open-skills) — een modulair en provider-agnostisch systeem voor het uitbreiden van AI-agent mogelijkheden.
+
+### Open Skills Inschakelen
+
+```toml
+[skills]
+open_skills_enabled = true
+# open_skills_dir = "/path/to/open-skills" # optioneel
+```
+
+Je kunt ook tijdens runtime overschrijven met `ZEROCLAW_OPEN_SKILLS_ENABLED` en `ZEROCLAW_OPEN_SKILLS_DIR`.
+
+## Ontwikkeling
+
+```bash
+cargo build # Dev build
+cargo build --release # Release build (codegen-units=1, werkt op alle apparaten inclusief Raspberry Pi)
+cargo build --profile release-fast # Snellere build (codegen-units=8, vereist 16 GB+ RAM)
+cargo test # Voer volledige test suite uit
+cargo clippy --locked --all-targets -- -D clippy::correctness
+cargo fmt # Formaat
+
+# Voer SQLite vs Markdown vergelijkingsbenchmark uit
+cargo test --test memory_comparison -- --nocapture
+```
+
+### Pre-push hook
+
+Een git hook voert `cargo fmt --check`, `cargo clippy -- -D warnings`, en `cargo test` uit voor elke push. Schakel het één keer in:
+
+```bash
+git config core.hooksPath .githooks
+```
+
+### Build Probleemoplossing (OpenSSL fouten op Linux)
+
+Als je een `openssl-sys` build fout tegenkomt, synchroniseer afhankelijkheden en compileer opnieuw met de repository's lockfile:
+
+```bash
+git pull
+cargo build --release --locked
+cargo install --path . --force --locked
+```
+
+ZeroClaw is geconfigureerd om `rustls` te gebruiken voor HTTP/TLS afhankelijkheden; `--locked` houdt de transitieve grafiek deterministisch in schone omgevingen.
+
+Om de hook over te slaan wanneer je een snelle push nodig hebt tijdens ontwikkeling:
+
+```bash
+git push --no-verify
+```
+
+## Samenwerking & Docs
+
+Begin met de documentatie hub voor een taak-gebaseerde kaart:
+
+- Documentatie Hub: [`docs/README.md`](docs/README.md)
+- Geünificeerde Docs TOC: [`docs/SUMMARY.md`](docs/SUMMARY.md)
+- Commando's Referentie: [`docs/commands-reference.md`](docs/commands-reference.md)
+- Configuratie Referentie: [`docs/config-reference.md`](docs/config-reference.md)
+- Providers Referentie: [`docs/providers-reference.md`](docs/providers-reference.md)
+- Kanalen Referentie: [`docs/channels-reference.md`](docs/channels-reference.md)
+- Operations Runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
+- Probleemoplossing: [`docs/troubleshooting.md`](docs/troubleshooting.md)
+- Docs Inventaris/Classificatie: [`docs/docs-inventory.md`](docs/docs-inventory.md)
+- PR/Issue Triage Snapshot (vanaf 18 feb. 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
+
+Belangrijkste samenwerkingsreferenties:
+
+- Documentatie Hub: [docs/README.md](docs/README.md)
+- Documentatie Sjabloon: [docs/doc-template.md](docs/doc-template.md)
+- Documentatiewijziging Checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
+- Kanaal Configuratie Referentie: [docs/channels-reference.md](docs/channels-reference.md)
+- Matrix Versleutelde Kamer Operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
+- Bijdrage Gids: [CONTRIBUTING.md](CONTRIBUTING.md)
+- PR Workflow Beleid: [docs/pr-workflow.md](docs/pr-workflow.md)
+- Reviewer Playbook (triage + diepgaande review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
+- Eigendom en CI Triage Kaart: [docs/ci-map.md](docs/ci-map.md)
+- Beveiligingsopenbaarmaking Beleid: [SECURITY.md](SECURITY.md)
+
+Voor implementatie en runtime operaties:
+
+- Netwerk Implementatie Gids: [docs/network-deployment.md](docs/network-deployment.md)
+- Proxy Agent Playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
+
+## ZeroClaw Ondersteunen
+
+Als ZeroClaw je werk helpt en je de doorlopende ontwikkeling wilt ondersteunen, kun je hier doneren:
+
+
+
+### 🙏 Speciale Dank
+
+Een oprechte dankjewel aan de gemeenschappen en instellingen die dit open-source werk inspireren en voeden:
+
+- **Harvard University** — voor het bevorderen van intellectuele nieuwsgierigheid en het verleggen van de grenzen van wat mogelijk is.
+- **MIT** — voor het verdedigen van open kennis, open source, en de overtuiging dat technologie toegankelijk moet zijn voor iedereen.
+- **Sundai Club** — voor de gemeenschap, energie, en de onophoudelijke wil om dingen te bouwen die ertoe doen.
+- **De Wereld en Verder** 🌍✨ — aan elke bijdrager, dromer, en bouwer daarbuiten die open source tot een kracht voor goed maakt. Dit is voor jou.
+
+We bouwen in open source omdat de beste ideeën van overal komen. Als je dit leest, ben je er deel van. Welkom. 🦀❤️
+
+## ⚠️ Officiële Repository en Implantatie Waarschuwing
+
+**Dit is de enige officiële ZeroClaw repository:**
+
+> https://github.com/zeroclaw-labs/zeroclaw
+
+Elke andere repository, organisatie, domein of pakket dat beweert "ZeroClaw" te zijn of affiniteit met ZeroClaw Labs suggereert is **niet-geautoriseerd en niet gelieerd aan dit project**. Bekende niet-geautoriseerde forks worden vermeld in [TRADEMARK.md](TRADEMARK.md).
+
+Als je imitatie of handelsmerk misbruik tegenkomt, [open dan een issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
+
+---
+
+## Licentie
+
+ZeroClaw is dubbel gelicentieerd voor maximale openheid en bijdrager bescherming:
+
+| Licentie | Gebruiksscenario's |
+| ---------------------------- | ------------------------------------------------------------ |
+| [MIT](LICENSE-MIT) | Open-source, onderzoek, academisch, persoonlijk gebruik |
+| [Apache 2.0](LICENSE-APACHE) | Patent bescherming, institutioneel, commerciële implementatie |
+
+Je kunt een van beide licenties kiezen. **Bijdragers verlenen automatisch rechten onder beide** — zie [CLA.md](CLA.md) voor de volledige bijdrager overeenkomst.
+
+### Handelsmerk
+
+De naam **ZeroClaw** en het logo zijn geregistreerde handelsmerken van ZeroClaw Labs. Deze licentie verleent geen toestemming om ze te gebruiken om goedkeuring of affiniteit te impliceren. Zie [TRADEMARK.md](TRADEMARK.md) voor toegestane en verboden gebruiksmogelijkheden.
+
+### Bijdrager Beschermingen
+
+- **Je behoudt auteursrechten** op je bijdragen
+- **Patent verlening** (Apache 2.0) beschermt je tegen patent claims door andere bijdragers
+- Je bijdragen worden **permanent toegeschreven** in de commit geschiedenis en [NOTICE](NOTICE)
+- Geen handelsmerk rechten worden overgedragen door bij te dragen
+
+## Bijdragen
+
+Zie [CONTRIBUTING.md](CONTRIBUTING.md) en [CLA.md](CLA.md). Implementeer een trait, dien een PR in:
+
+- CI workflow gids: [docs/ci-map.md](docs/ci-map.md)
+- Nieuwe `Provider` → `src/providers/`
+- Nieuw `Channel` → `src/channels/`
+- Nieuwe `Observer` → `src/observability/`
+- Nieuwe `Tool` → `src/tools/`
+- Nieuwe `Memory` → `src/memory/`
+- Nieuwe `Tunnel` → `src/tunnel/`
+- Nieuwe `Skill` → `~/.zeroclaw/workspace/skills/<skill-naam>/`
+
+---
+
+**ZeroClaw** — Nul overhead. Nul compromis. Implementeer overal. Wissel alles. 🦀
+
+## Sterren Geschiedenis
+
+
diff --git a/README.pl.md b/README.pl.md
new file mode 100644
index 000000000..520221c23
--- /dev/null
+++ b/README.pl.md
@@ -0,0 +1,914 @@
+# ZeroClaw 🦀
+
+> Zero narzutu. Zero kompromisów. 100% Rust. 100% Agnostyczny.
+> ⚡️ Działa na sprzęcie za $10 z <5MB RAM: To 99% mniej pamięci niż OpenClaw i 98% taniej niż Mac mini!
+
+Zbudowany przez studentów i członków społeczności Harvard, MIT i Sundai.Club.
+
+Szybka, lekka i w pełni autonomiczna infrastruktura asystenta AI. Wdrażaj wszędzie. Zamieniaj cokolwiek.
+
+ZeroClaw to system operacyjny runtime dla workflow agentów — infrastruktura abstrahująca modele, narzędzia, pamięć i wykonanie do budowania agentów raz i uruchamiania ich wszędzie.
+
+Architektura oparta na traitach · bezpieczny runtime domyślnie · wymienny dostawca/kanał/narzędzie · wszystko jest podłączalne
+
+### 📢 Ogłoszenia
+
+Użyj tej tabeli dla ważnych ogłoszeń (zmiany kompatybilności, powiadomienia bezpieczeństwa, okna serwisowe i blokady wersji).
+
+| Data (UTC) | Poziom | Ogłoszenie | Działanie |
+| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 2026-02-19 | _Krytyczny_ | **Nie jesteśmy powiązani** z `openagen/zeroclaw` lub `zeroclaw.org`. Domena `zeroclaw.org` obecnie wskazuje na fork `openagen/zeroclaw`, i ta domena/repozytorium podszywa się pod naszą oficjalną stronę/projekt. | Nie ufaj informacjom, plikom binarnym, zbiórkom funduszy lub ogłoszeniom z tych źródeł. Używaj tylko [tego repozytorium](https://github.com/zeroclaw-labs/zeroclaw) i naszych zweryfikowanych kont społecznościowych. |
+| 2026-02-21 | _Ważne_ | Nasza oficjalna strona jest teraz online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Dziękujemy za cierpliwość podczas oczekiwania. Nadal wykrywamy próby podszywania się: nie uczestnicz w żadnej działalności inwestycyjnej/finansowej w imieniu ZeroClaw jeśli nie jest opublikowana przez nasze oficjalne kanały. | Używaj [tego repozytorium](https://github.com/zeroclaw-labs/zeroclaw) jako jedynego źródła prawdy. Śledź [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupa)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), i [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) dla oficjalnych aktualizacji. |
+| 2026-02-19 | _Ważne_ | Anthropic zaktualizował warunki używania uwierzytelniania i poświadczeń 2026-02-19. Uwierzytelnianie OAuth (Free, Pro, Max) jest wyłącznie dla Claude Code i Claude.ai; używanie tokenów OAuth Claude Free/Pro/Max w jakimkolwiek innym produkcie, narzędziu lub usłudze (w tym Agent SDK) nie jest dozwolone i może naruszać Warunki Użytkowania Konsumenta. | Prosimy tymczasowo unikać integracji OAuth Claude Code aby zapobiec potencjalnym stratom. Oryginalna klauzula: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
+
+### ✨ Funkcje
+
+- 🏎️ **Lekki Runtime Domyślnie:** Typowe workflow CLI i komendy statusu działają w przestrzeni pamięci kilku megabajtów w buildach produkcyjnych.
+- 💰 **Ekonomiczne Wdrażanie:** Zaprojektowane dla tanich płytek i małych instancji chmurowych bez ciężkich zależności runtime.
+- ⚡ **Szybkie Zimne Starty:** Runtime Rust pojedynczego binarium utrzymuje start komend i daemonów niemal natychmiastowy dla codziennych operacji.
+- 🌍 **Przenośna Architektura:** Jeden workflow z pojedynczym binarium na ARM, x86 i RISC-V, z wymiennym dostawcą/kanałem/narzędziem.
+
+### Dlaczego zespoły wybierają ZeroClaw
+
+- **Lekki domyślnie:** małe binarium Rust, szybki start, niski ślad pamięci.
+- **Bezpieczny przez design:** parowanie, ścisłe sandboxowanie, jawne listy dozwolone, zakres workspace.
+- **Całkowicie wymienny:** systemy rdzenne to trait-y (dostawcy, kanały, narzędzia, pamięć, tunele).
+- **Brak blokady dostawcy:** wsparcie dostawcy kompatybilnego z OpenAI + podłączalne własne endpointy.
+
+## Snapshot Benchmarku (ZeroClaw vs OpenClaw, Reprodukowalny)
+
+Szybki benchmark na maszynie lokalnej (macOS arm64, luty 2026) znormalizowany dla sprzętu edge 0.8 GHz.
+
+| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
+| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
+| **Język** | TypeScript | Python | Go | **Rust** |
+| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
+| **Start (rdzeń 0.8 GHz)** | > 500s | > 30s | < 1s | **< 10ms** |
+| **Rozmiar Binarny** | ~28 MB (dist) | N/A (Skrypty) | ~8 MB | **3.4 MB** |
+| **Koszt** | Mac Mini $599 | Linux SBC ~$50 | Płytka Linux $10 | **Dowolny sprzęt $10** |
+
+> Uwagi: Wyniki ZeroClaw są mierzone na buildach produkcyjnych używając `/usr/bin/time -l`. OpenClaw wymaga runtime Node.js (typowo ~390 MB dodatkowego narzutu pamięci), podczas gdy NanoBot wymaga runtime Python. PicoClaw i ZeroClaw to statyczne binaria. Powyższe liczby RAM to pamięć runtime; wymagania kompilacji w czasie build są wyższe.
+
+
+
+
+
+### Reprodukowalny Pomiar Lokalny
+
+Twierdzenia benchmark mogą się zmieniać wraz z ewolucją kodu i toolchainów, więc zawsze mierz swój aktualny build lokalnie:
+
+```bash
+cargo build --release
+ls -lh target/release/zeroclaw
+
+/usr/bin/time -l target/release/zeroclaw --help
+/usr/bin/time -l target/release/zeroclaw status
+```
+
+Przykładowa próbka (macOS arm64, zmierzone 18 lutego 2026):
+
+- Rozmiar binarium release: `8.8M`
+- `zeroclaw --help`: czas rzeczywisty ok. `0.02s`, szczytowy ślad pamięci ~`3.9 MB`
+- `zeroclaw status`: czas rzeczywisty ok. `0.01s`, szczytowy ślad pamięci ~`4.1 MB`
+
+## Wymagania Wstępne
+
+
+Windows
+
+### Windows — Wymagane
+
+1. **Visual Studio Build Tools** (dostarcza linker MSVC i Windows SDK):
+
+ ```powershell
+ winget install Microsoft.VisualStudio.2022.BuildTools
+ ```
+
+ Podczas instalacji (lub przez Visual Studio Installer), wybierz obciążenie **"Desktop development with C++"**.
+
+2. **Toolchain Rust:**
+
+ ```powershell
+ winget install Rustlang.Rustup
+ ```
+
+ Po instalacji, otwórz nowy terminal i uruchom `rustup default stable` aby upewnić się, że stabilny toolchain jest aktywny.
+
+3. **Zweryfikuj** że oba działają:
+ ```powershell
+ rustc --version
+ cargo --version
+ ```
+
+### Windows — Opcjonalne
+
+- **Docker Desktop** — wymagany tylko jeśli używasz [Docker sandboxed runtime](#aktualne-wsparcie-runtime) (`runtime.kind = "docker"`). Zainstaluj przez `winget install Docker.DockerDesktop`.
+
+
+
+
+Linux / macOS
+
+### Linux / macOS — Wymagane
+
+1. **Niezbędne narzędzia build:**
+ - **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
+ - **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
+ - **macOS:** Zainstaluj Xcode Command Line Tools: `xcode-select --install`
+
+2. **Toolchain Rust:**
+
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+
+ Zobacz [rustup.rs](https://rustup.rs) dla szczegółów.
+
+3. **Zweryfikuj:**
+ ```bash
+ rustc --version
+ cargo --version
+ ```
+
+### Linux / macOS — Opcjonalne
+
+- **Docker** — wymagany tylko jeśli używasz [Docker sandboxed runtime](#aktualne-wsparcie-runtime) (`runtime.kind = "docker"`).
+ - **Linux (Debian/Ubuntu):** zobacz [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
+ - **Linux (Fedora/RHEL):** zobacz [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
+ - **macOS:** zainstaluj Docker Desktop przez [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)
+
+
+
+## Szybki Start
+
+### Opcja 1: Automatyczna konfiguracja (zalecana)
+
+Skrypt `bootstrap.sh` instaluje Rust, klonuje ZeroClaw, kompiluje go i konfiguruje twoje początkowe środowisko deweloperskie:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/bootstrap.sh | bash
+```
+
+To:
+
+1. Zainstaluje Rust (jeśli nieobecny)
+2. Sklonuje repozytorium ZeroClaw
+3. Skompiluje ZeroClaw w trybie release
+4. Zainstaluje `zeroclaw` w `~/.cargo/bin/`
+5. Utworzy domyślną strukturę workspace w `~/.zeroclaw/workspace/`
+6. Wygeneruje początkowy plik konfiguracyjny `~/.zeroclaw/workspace/config.toml`
+
+Po bootstrap, przeładuj swój shell lub uruchom `source ~/.cargo/env` aby używać komendy `zeroclaw` globalnie.
+
+### Opcja 2: Ręczna instalacja
+
+
+Kliknij aby zobaczyć kroki ręcznej instalacji
+
+```bash
+# 1. Sklonuj repozytorium
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+
+# 2. Skompiluj w release
+cargo build --release --locked
+
+# 3. Zainstaluj binarium
+cargo install --path . --locked
+
+# 4. Zainicjalizuj workspace
+zeroclaw init
+
+# 5. Zweryfikuj instalację
+zeroclaw --version
+zeroclaw status
+```
+
+
+
+### Po Instalacji
+
+Po zainstalowaniu (przez bootstrap lub ręcznie), powinieneś widzieć:
+
+```
+~/.zeroclaw/workspace/
+├── config.toml # Główna konfiguracja
+├── .pairing # Sekrety parowania (generowane przy pierwszym uruchomieniu)
+├── logs/ # Logi daemon/agent
+├── skills/ # Własne umiejętności
+└── memory/ # Przechowywanie kontekstu konwersacji
+```
+
+**Następne kroki:**
+
+1. Skonfiguruj swoich dostawców AI w `~/.zeroclaw/workspace/config.toml`
+2. Sprawdź [referencje konfiguracji](docs/config-reference.md) dla opcji zaawansowanych
+3. Uruchom agenta: `zeroclaw agent start`
+4. Testuj przez preferowany kanał (zobacz [referencje kanałów](docs/channels-reference.md))
+
+## Konfiguracja
+
+Edytuj `~/.zeroclaw/workspace/config.toml` aby skonfigurować dostawców, kanały i zachowanie systemu.
+
+### Szybka Referencja Konfiguracji
+
+```toml
+[providers.anthropic]
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+api_key = "sk-..."
+model = "gpt-4o"
+
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@bot:matrix.org"
+password = "..."
+
+[memory]
+kind = "markdown" # lub "sqlite" lub "none"
+
+[runtime]
+kind = "native" # lub "docker" (wymaga Docker)
+```
+
+**Pełne dokumenty referencyjne:**
+
+- [Referencje Konfiguracji](docs/config-reference.md) — wszystkie ustawienia, walidacje, wartości domyślne
+- [Referencje Dostawców](docs/providers-reference.md) — konfiguracje specyficzne dla dostawców AI
+- [Referencje Kanałów](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord i więcej
+- [Operacje](docs/operations-runbook.md) — monitoring produkcyjny, rotacja sekretów, skalowanie
+
+### Aktualne Wsparcie Runtime
+
+ZeroClaw wspiera dwa backendy wykonania kodu:
+
+- **`native`** (domyślnie) — bezpośrednie wykonanie procesu, najszybsza ścieżka, idealna dla zaufanych środowisk
+- **`docker`** — pełna izolacja kontenera, wzmocnione polityki bezpieczeństwa, wymaga Docker
+
+Użyj `runtime.kind = "docker"` jeśli potrzebujesz ścisłego sandboxowania lub izolacji sieciowej. Zobacz [referencje konfiguracji](docs/config-reference.md#runtime) dla pełnych szczegółów.
+
+## Komendy
+
+```bash
+# Zarządzanie workspace
+zeroclaw init # Inicjuje nowy workspace
+zeroclaw status # Pokazuje status daemon/agent
+zeroclaw config validate # Weryfikuje składnię i wartości config.toml
+
+# Zarządzanie daemon
+zeroclaw daemon start # Uruchamia daemon w tle
+zeroclaw daemon stop # Zatrzymuje działający daemon
+zeroclaw daemon restart # Restartuje daemon (przeładowanie config)
+zeroclaw daemon logs # Pokazuje logi daemon
+
+# Zarządzanie agent
+zeroclaw agent start # Uruchamia agenta (wymaga działającego daemon)
+zeroclaw agent stop # Zatrzymuje agenta
+zeroclaw agent restart # Restartuje agenta (przeładowanie config)
+
+# Operacje parowania
+zeroclaw pairing init # Generuje nowy sekret parowania
+zeroclaw pairing rotate # Rotuje istniejący sekret parowania
+
+# Tunneling (dla publicznej ekspozycji)
+zeroclaw tunnel start # Uruchamia tunnel do lokalnego daemon
+zeroclaw tunnel stop # Zatrzymuje aktywny tunnel
+
+# Diagnostyka
+zeroclaw doctor # Uruchamia sprawdzenia zdrowia systemu
+zeroclaw version # Pokazuje wersję i informacje o build
+```
+
+Zobacz [Referencje Komend](docs/commands-reference.md) dla pełnych opcji i przykładów.
+
+## Architektura
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Kanały (trait) │
+│ Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Orchestrator Agent │
+│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
+│ │ Routing │ │ Kontekst │ │ Wykonanie │ │
+│ │ Wiadomość │ │ Pamięć │ │ Narzędzie │ │
+│ └──────────────┘ └──────────────┘ └──────────────┘ │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ┌───────────────┼───────────────┐
+ ▼ ▼ ▼
+┌──────────────┐ ┌──────────────┐ ┌──────────────┐
+│ Dostawcy │ │ Pamięć │ │ Narzędzia │
+│ (trait) │ │ (trait) │ │ (trait) │
+├──────────────┤ ├──────────────┤ ├──────────────┤
+│ Anthropic │ │ Markdown │ │ Filesystem │
+│ OpenAI │ │ SQLite │ │ Bash │
+│ Gemini │ │ None │ │ Web Fetch │
+│ Ollama │ │ Custom │ │ Custom │
+│ Custom │ └──────────────┘ └──────────────┘
+└──────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Runtime (trait) │
+│ Native │ Docker │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+**Kluczowe zasady:**
+
+- Wszystko jest **trait** — dostawcy, kanały, narzędzia, pamięć, tunele
+- Kanały wywołują orchestrator; orchestrator wywołuje dostawców + narzędzia
+- System pamięci zarządza kontekstem konwersacji (markdown, SQLite, lub brak)
+- Runtime abstrahuje wykonanie kodu (natywny lub Docker)
+- Brak blokady dostawcy — zamieniaj Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama bez zmian kodu
+
+Zobacz [dokumentację architektury](docs/architecture.svg) dla szczegółowych diagramów i szczegółów implementacji.
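+
+Dla ilustracji braku blokady dostawcy, poniżej poglądowy szkic `config.toml` (wartości kluczy i modeli są przykładowe, a sekcja `[providers.ollama]` to założenie oparte na tabeli wspieranych dostawców): zmiana `default_provider` przełącza agenta z Anthropic na lokalną Ollamę bez żadnych zmian w kodzie.
+
+```toml
+# Szkic: przełączenie domyślnego dostawcy bez zmian w kodzie
+[providers.anthropic]
+enabled = true
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.ollama]
+enabled = true
+model = "llama3.3"            # model lokalny, bez klucza API
+
+[orchestrator]
+default_provider = "ollama"   # wcześniej: "anthropic"
+fallback_providers = ["anthropic"]
+```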
+
+## Przykłady
+
+### Bot Telegram
+
+```toml
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+allowed_users = [987654321] # Twój Telegram user ID
+```
+
+Uruchom daemon + agent, a następnie wyślij wiadomość do swojego bota na Telegram:
+
+```
+/start
+Cześć! Czy mógłbyś pomóc mi napisać skrypt Python?
+```
+
+Bot odpowiada kodem wygenerowanym przez AI, wykonuje narzędzia jeśli wymagane i utrzymuje kontekst konwersacji.
+
+### Matrix (szyfrowanie end-to-end)
+
+```toml
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@zeroclaw:matrix.org"
+password = "..."
+device_name = "zeroclaw-prod"
+e2ee_enabled = true
+```
+
+Zaproś `@zeroclaw:matrix.org` do zaszyfrowanego pokoju, a bot odpowie z pełnym szyfrowaniem. Zobacz [Przewodnik Matrix E2EE](docs/matrix-e2ee-guide.md) dla konfiguracji weryfikacji urządzenia.
+
+### Multi-Dostawca
+
+```toml
+[providers.anthropic]
+enabled = true
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+enabled = true
+api_key = "sk-..."
+model = "gpt-4o"
+
+[orchestrator]
+default_provider = "anthropic"
+fallback_providers = ["openai"] # Failover przy błędzie dostawcy
+```
+
+Jeśli Anthropic zawiedzie lub ma rate-limit, orchestrator automatycznie przełącza się na OpenAI.
+
+### Własna Pamięć
+
+```toml
+[memory]
+kind = "sqlite"
+path = "~/.zeroclaw/workspace/memory/conversations.db"
+retention_days = 90 # Automatyczne czyszczenie po 90 dniach
+```
+
+Lub użyj Markdown dla przechowywania czytelnego dla ludzi:
+
+```toml
+[memory]
+kind = "markdown"
+path = "~/.zeroclaw/workspace/memory/"
+```
+
+Zobacz [Referencje Konfiguracji](docs/config-reference.md#memory) dla wszystkich opcji pamięci.
+
+## Wsparcie Dostawców
+
+| Dostawca | Status | API Key | Przykładowe Modele |
+| ----------------- | ----------- | ------------------- | ---------------------------------------------------- |
+| **Anthropic** | ✅ Stabilny | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
+| **OpenAI** | ✅ Stabilny | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
+| **Google Gemini** | ✅ Stabilny | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
+| **Ollama** | ✅ Stabilny | N/A (lokalny) | `llama3.3`, `qwen2.5`, `phi4` |
+| **Cerebras** | ✅ Stabilny | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
+| **Groq** | ✅ Stabilny | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
+| **Mistral** | 🚧 Planowany | `MISTRAL_API_KEY` | TBD |
+| **Cohere** | 🚧 Planowany | `COHERE_API_KEY` | TBD |
+
+### Własne Endpointy
+
+ZeroClaw wspiera endpointy kompatybilne z OpenAI:
+
+```toml
+[providers.custom]
+enabled = true
+api_key = "..."
+base_url = "https://api.your-llm-provider.com/v1"
+model = "your-model-name"
+```
+
+Przykład: użyj [LiteLLM](https://github.com/BerriAI/litellm) jako proxy aby uzyskać dostęp do każdego LLM przez interfejs OpenAI.
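+
+Poglądowy szkic takiej konfiguracji (założenia: lokalne proxy LiteLLM nasłuchuje na `http://localhost:4000/v1`, a nazwa modelu pochodzi z konfiguracji samego proxy):
+
+```toml
+[providers.custom]
+enabled = true
+api_key = "dowolny-niepusty-klucz"     # klucz uwierzytelniający proxy (wartość przykładowa)
+base_url = "http://localhost:4000/v1"  # lokalny endpoint LiteLLM (założenie)
+model = "gpt-4o"                       # alias modelu zdefiniowany w LiteLLM (przykład)
+```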
+
+Zobacz [Referencje Dostawców](docs/providers-reference.md) dla pełnych szczegółów konfiguracji.
+
+## Wsparcie Kanałów
+
+| Kanał | Status | Uwierzytelnianie | Uwagi |
+| ------------ | ----------- | ------------------------ | --------------------------------------------------------- |
+| **Telegram** | ✅ Stabilny | Bot Token | Pełne wsparcie w tym pliki, obrazy, przyciski inline |
+| **Matrix** | ✅ Stabilny | Hasło lub Token | Wsparcie E2EE z weryfikacją urządzenia |
+| **Slack** | 🚧 Planowany | OAuth lub Bot Token | Wymaga dostępu do workspace |
+| **Discord** | 🚧 Planowany | Bot Token | Wymaga uprawnień guild |
+| **WhatsApp** | 🚧 Planowany | Twilio lub oficjalne API | Wymaga konta business |
+| **CLI** | ✅ Stabilny | Brak | Bezpośredni interfejs konwersacyjny |
+| **Web** | 🚧 Planowany | API Key lub OAuth | Interfejs czatu oparty na przeglądarce |
+
+Zobacz [Referencje Kanałów](docs/channels-reference.md) dla pełnych instrukcji konfiguracji.
+
+## Wsparcie Narzędzi
+
+ZeroClaw dostarcza wbudowane narzędzia do wykonania kodu, dostępu do systemu plików i pobierania web:
+
+| Narzędzie | Opis | Wymagany Runtime |
+| -------------------- | --------------------------- | ----------------------------- |
+| **bash** | Wykonuje komendy shell | Natywny lub Docker |
+| **python** | Wykonuje skrypty Python | Python 3.8+ (natywny) lub Docker |
+| **javascript** | Wykonuje kod Node.js | Node.js 18+ (natywny) lub Docker |
+| **filesystem_read** | Odczytuje pliki | Natywny lub Docker |
+| **filesystem_write** | Zapisuje pliki | Natywny lub Docker |
+| **web_fetch** | Pobiera treści web | Natywny lub Docker |
+
+### Bezpieczeństwo Wykonania
+
+- **Natywny Runtime** — działa jako proces użytkownika daemon, pełny dostęp do systemu plików
+- **Docker Runtime** — pełna izolacja kontenera, oddzielne systemy plików i sieci
+
+Skonfiguruj politykę wykonania w `config.toml`:
+
+```toml
+[runtime]
+kind = "docker"
+allowed_tools = ["bash", "python", "filesystem_read"] # Jawna lista dozwolona
+```
+
+Zobacz [Referencje Konfiguracji](docs/config-reference.md#runtime) dla pełnych opcji bezpieczeństwa.
+
+## Wdrażanie
+
+### Lokalne Wdrażanie (Rozwój)
+
+```bash
+zeroclaw daemon start
+zeroclaw agent start
+```
+
+### Serwerowe Wdrażanie (Produkcja)
+
+Użyj systemd do zarządzania daemon i agent jako usługi:
+
+```bash
+# Zainstaluj binarium
+cargo install --path . --locked
+
+# Skonfiguruj workspace
+zeroclaw init
+
+# Utwórz pliki usług systemd
+sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
+sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
+
+# Włącz i uruchom usługi
+sudo systemctl enable zeroclaw-daemon zeroclaw-agent
+sudo systemctl start zeroclaw-daemon zeroclaw-agent
+
+# Zweryfikuj status
+sudo systemctl status zeroclaw-daemon
+sudo systemctl status zeroclaw-agent
+```
+
+Zobacz [Przewodnik Wdrażania Sieciowego](docs/network-deployment.md) dla pełnych instrukcji wdrażania produkcyjnego.
+
+### Docker
+
+```bash
+# Zbuduj obraz
+docker build -t zeroclaw:latest .
+
+# Uruchom kontener
+docker run -d \
+ --name zeroclaw \
+ -v ~/.zeroclaw/workspace:/workspace \
+ -e ANTHROPIC_API_KEY=sk-ant-... \
+ zeroclaw:latest
+```
+
+Zobacz [`Dockerfile`](Dockerfile) dla szczegółów budowania i opcji konfiguracji.
+
+### Sprzęt Edge
+
+ZeroClaw jest zaprojektowany do działania na sprzęcie niskiego poboru mocy:
+
+- **Raspberry Pi Zero 2 W** — ~512 MB RAM, pojedynczy rdzeń ARMv8, < $5 koszt sprzętu
+- **Raspberry Pi 4/5** — 1 GB+ RAM, wielordzeniowy, idealny dla równoczesnych obciążeń
+- **Orange Pi Zero 2** — ~512 MB RAM, czterordzeniowy ARMv8, ultra-niski koszt
+- **SBC x86 (Intel N100)** — 4-8 GB RAM, szybkie buildy, natywne wsparcie Docker
+
+Zobacz [Przewodnik Sprzętowy](docs/hardware/README.md) dla instrukcji konfiguracji specyficznych dla urządzenia.
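+
+Jednym z możliwych podejść jest kompilacja skrośna na stacji roboczej i skopiowanie binarium na urządzenie. Poniżej poglądowy szkic dla ARM 64-bit (założenia: zainstalowany `rustup`; dla niektórych celów może być wymagany dodatkowy linker skrośny):
+
+```bash
+# Dodaj cel ARM64 i zbuduj binarium release (np. dla Raspberry Pi 4/5, 64-bit)
+rustup target add aarch64-unknown-linux-gnu
+cargo build --release --locked --target aarch64-unknown-linux-gnu
+
+# Wynikowe binarium do skopiowania na urządzenie
+ls -lh target/aarch64-unknown-linux-gnu/release/zeroclaw
+```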
+
+## Tunneling (Publiczna Ekspozycja)
+
+Udostępnij swój lokalny daemon ZeroClaw w sieci publicznej przez bezpieczne tunele:
+
+```bash
+zeroclaw tunnel start --provider cloudflare
+```
+
+Wspierani dostawcy tunnel:
+
+- **Cloudflare Tunnel** — darmowy HTTPS, brak ekspozycji portów, wsparcie multi-domenowe
+- **Ngrok** — szybka konfiguracja, własne domeny (plan płatny)
+- **Tailscale** — prywatna sieć mesh, brak publicznego portu
+
+Zobacz [Referencje Konfiguracji](docs/config-reference.md#tunnel) dla pełnych opcji konfiguracji.
+
+## Bezpieczeństwo
+
+ZeroClaw implementuje wiele warstw bezpieczeństwa:
+
+### Parowanie
+
+Daemon generuje sekret parowania przy pierwszym uruchomieniu i przechowuje go w `~/.zeroclaw/workspace/.pairing`. Klienci (agent, CLI) muszą przedstawić ten sekret, aby się połączyć.
+
+```bash
+zeroclaw pairing rotate # Generuje nowy sekret i unieważnia stary
+```
+
+### Sandbox
+
+- **Docker Runtime** — pełna izolacja kontenera z oddzielnymi systemami plików i sieciami
+- **Natywny Runtime** — działa jako proces użytkownika, domyślnie ograniczony do workspace
+
+### Listy Dozwolone
+
+Kanały mogą ograniczać dostęp po ID użytkownika:
+
+```toml
+[channels.telegram]
+enabled = true
+allowed_users = [123456789, 987654321] # Jawna lista dozwolona
+```
+
+### Szyfrowanie
+
+- **Matrix E2EE** — pełne szyfrowanie end-to-end z weryfikacją urządzenia
+- **Transport TLS** — cały ruch API i tunnel używa HTTPS/TLS
+
+Zobacz [Dokumentację Bezpieczeństwa](docs/security/README.md) dla pełnych polityk i praktyk.
+
+## Obserwowalność
+
+ZeroClaw loguje do `~/.zeroclaw/workspace/logs/` domyślnie. Logi są przechowywane po komponentach:
+
+```
+~/.zeroclaw/workspace/logs/
+├── daemon.log # Logi daemon (startup, żądania API, błędy)
+├── agent.log # Logi agent (routing wiadomości, wykonanie narzędzi)
+├── telegram.log # Logi specyficzne dla kanału (jeśli włączone)
+└── matrix.log # Logi specyficzne dla kanału (jeśli włączone)
+```
+
+### Konfiguracja Logowania
+
+```toml
+[logging]
+level = "info" # debug, info, warn, error
+path = "~/.zeroclaw/workspace/logs/"
+rotation = "daily" # daily, hourly, size
+max_size_mb = 100 # Dla rotacji opartej na rozmiarze
+retention_days = 30 # Automatyczne czyszczenie po N dniach
+```
+
+Zobacz [Referencje Konfiguracji](docs/config-reference.md#logging) dla wszystkich opcji logowania.
+
+### Metryki (Planowane)
+
+Wsparcie metryk Prometheus dla monitoringu produkcyjnego wkrótce. Śledzenie w [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
+
+## Umiejętności
+
+ZeroClaw wspiera własne umiejętności — wielokrotnego użytku moduły rozszerzające możliwości systemu.
+
+### Definicja Umiejętności
+
+Umiejętności są przechowywane w `~/.zeroclaw/workspace/skills//` z tą strukturą:
+
+```
+skills/
+└── my-skill/
+ ├── skill.toml # Metadane umiejętności (nazwa, opis, zależności)
+ ├── prompt.md # Prompt systemowy dla AI
+ └── tools/ # Opcjonalne własne narzędzia
+ └── my_tool.py
+```
+
+### Przykład Umiejętności
+
+```toml
+# skills/web-research/skill.toml
+[skill]
+name = "web-research"
+description = "Szuka w web i podsumowuje wyniki"
+version = "1.0.0"
+
+[dependencies]
+tools = ["web_fetch", "bash"]
+```
+
+```markdown
+
+
+Jesteś asystentem badawczym. Kiedy proszą o zbadanie czegoś:
+
+1. Użyj web_fetch aby pobrać treść
+2. Podsumuj wyniki w łatwym do czytania formacie
+3. Zacytuj źródła z URL-ami
+```
+
+### Użycie Umiejętności
+
+Umiejętności są automatycznie ładowane przy starcie agenta. Odwołuj się do nich po nazwie w konwersacjach:
+
+```
+Użytkownik: Użyj umiejętności web-research aby znaleźć najnowsze wiadomości AI
+Bot: [ładuje umiejętność web-research, wykonuje web_fetch, podsumowuje wyniki]
+```
+
+Zobacz sekcję [Umiejętności](#umiejętności) dla pełnych instrukcji tworzenia umiejętności.
+
+## Open Skills
+
+ZeroClaw wspiera [Open Skills](https://github.com/openagents-com/open-skills) — modułowy i agnostyczny względem dostawcy system do rozszerzania możliwości agentów AI.
+
+### Włącz Open Skills
+
+```toml
+[skills]
+open_skills_enabled = true
+# open_skills_dir = "/path/to/open-skills" # opcjonalne
+```
+
+Możesz też nadpisać w runtime używając `ZEROCLAW_OPEN_SKILLS_ENABLED` i `ZEROCLAW_OPEN_SKILLS_DIR`.
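+
+Minimalny szkic takiego nadpisania w bieżącej sesji powłoki (wartości i ścieżka katalogu są przykładowe):
+
+```bash
+# Nadpisz ustawienia Open Skills zmiennymi środowiskowymi
+export ZEROCLAW_OPEN_SKILLS_ENABLED=true
+export ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills"   # ścieżka przykładowa
+
+# Uruchom agenta w tej samej sesji, aby odziedziczył zmienne
+zeroclaw agent start
+```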
+
+## Rozwój
+
+```bash
+cargo build # Build deweloperski
+cargo build --release # Build release (codegen-units=1, działa na wszystkich urządzeniach w tym Raspberry Pi)
+cargo build --profile release-fast # Szybszy build (codegen-units=8, wymaga 16 GB+ RAM)
+cargo test # Uruchom pełny zestaw testów
+cargo clippy --locked --all-targets -- -D clippy::correctness
+cargo fmt # Formatowanie
+
+# Uruchom benchmark porównawczy SQLite vs Markdown
+cargo test --test memory_comparison -- --nocapture
+```
+
+### Hook pre-push
+
+Hook git uruchamia `cargo fmt --check`, `cargo clippy -- -D warnings`, i `cargo test` przed każdym push. Włącz go raz:
+
+```bash
+git config core.hooksPath .githooks
+```
+
+### Rozwiązywanie Problemów Build (błędy OpenSSL na Linux)
+
+Jeśli napotkasz błąd build `openssl-sys`, zsynchronizuj zależności i przekompiluj z lockfile repozytorium:
+
+```bash
+git pull
+cargo build --release --locked
+cargo install --path . --force --locked
+```
+
+ZeroClaw jest skonfigurowany do używania `rustls` dla zależności HTTP/TLS; `--locked` utrzymuje graf przechodni deterministyczny w czystych środowiskach.
+
+Aby pominąć hook gdy potrzebujesz szybkiego push podczas rozwoju:
+
+```bash
+git push --no-verify
+```
+
+## Współpraca i Docs
+
+Zacznij od centrum dokumentacji dla mapy opartej na zadaniach:
+
+- Centrum Dokumentacji: [`docs/README.md`](docs/README.md)
+- Zunifikowany Spis Treści Docs: [`docs/SUMMARY.md`](docs/SUMMARY.md)
+- Referencje Komend: [`docs/commands-reference.md`](docs/commands-reference.md)
+- Referencje Konfiguracji: [`docs/config-reference.md`](docs/config-reference.md)
+- Referencje Dostawców: [`docs/providers-reference.md`](docs/providers-reference.md)
+- Referencje Kanałów: [`docs/channels-reference.md`](docs/channels-reference.md)
+- Runbook Operacji: [`docs/operations-runbook.md`](docs/operations-runbook.md)
+- Rozwiązywanie Problemów: [`docs/troubleshooting.md`](docs/troubleshooting.md)
+- Inwentarz/Klasyfikacja Docs: [`docs/docs-inventory.md`](docs/docs-inventory.md)
+- Snapshot Triażu PR/Issue (stan na 18 lutego 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
+
+Główne referencje współpracy:
+
+- Centrum Dokumentacji: [docs/README.md](docs/README.md)
+- Szablon Dokumentacji: [docs/doc-template.md](docs/doc-template.md)
+- Checklist Zmiany Dokumentacji: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
+- Referencje Konfiguracji Kanałów: [docs/channels-reference.md](docs/channels-reference.md)
+- Operacje Zaszyfrowanych Pokoi Matrix: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
+- Przewodnik Wkładu: [CONTRIBUTING.md](CONTRIBUTING.md)
+- Polityka Workflow PR: [docs/pr-workflow.md](docs/pr-workflow.md)
+- Playbook Recenzenta (triage + głęboka recenzja): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
+- Mapa Własności i Triażu CI: [docs/ci-map.md](docs/ci-map.md)
+- Polityka Ujawnienia Bezpieczeństwa: [SECURITY.md](SECURITY.md)
+
+Dla wdrażania i operacji runtime:
+
+- Przewodnik Wdrażania Sieciowego: [docs/network-deployment.md](docs/network-deployment.md)
+- Playbook Proxy Agent: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
+
+## Wspieraj ZeroClaw
+
+Jeśli ZeroClaw pomaga twojej pracy i chcesz wspierać ciągły rozwój, możesz przekazać darowiznę tutaj:
+
+
+
+### 🙏 Specjalne Podziękowania
+
+Serdeczne podziękowania dla społeczności i instytucji które inspirują i zasilają tę pracę open-source:
+
+- **Harvard University** — za promowanie intelektualnej ciekawości i przesuwanie granic tego co możliwe.
+- **MIT** — za obronę otwartej wiedzy, open source, i przekonania że technologia powinna być dostępna dla wszystkich.
+- **Sundai Club** — za społeczność, energię, i nieustanną wolę budowania rzeczy które mają znaczenie.
+- **Świat i Dalej** 🌍✨ — dla każdego kontrybutora, marzyciela, i budowniczego tam na zewnątrz który czyni open source siłą dla dobra. To dla ciebie.
+
+Budujemy w open source ponieważ najlepsze pomysły przychodzą zewsząd. Jeśli to czytasz, jesteś tego częścią. Witamy. 🦀❤️
+
+## ⚠️ Oficjalne Repozytorium i Ostrzeżenie o Podszywaniu Się
+
+**To jest jedyne oficjalne repozytorium ZeroClaw:**
+
+> https://github.com/zeroclaw-labs/zeroclaw
+
+Jakiekolwiek inne repozytorium, organizacja, domena lub pakiet twierdzący że jest "ZeroClaw" lub sugerujący powiązanie z ZeroClaw Labs jest **nieautoryzowany i niepowiązany z tym projektem**. Znane nieautoryzowane forki będą wymienione w [TRADEMARK.md](TRADEMARK.md).
+
+Jeśli napotkasz podszywanie się lub nadużycie znaku towarowego, proszę [otwórz issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
+
+---
+
+## Licencja
+
+ZeroClaw jest podwójnie licencjonowany dla maksymalnej otwartości i ochrony kontrybutorów:
+
+| Licencja | Przypadki Użycia |
+| ---------------------------- | ------------------------------------------------------------ |
+| [MIT](LICENSE-MIT)           | Open source, badania, użycie akademickie i osobiste           |
+| [Apache 2.0](LICENSE-APACHE) | Ochrona patentowa, instytucjonalne, wdrożenie komercyjne |
+
+Możesz wybrać jedną z licencji. **Kontrybutorzy automatycznie przyznają prawa pod obiema** — zobacz [CLA.md](CLA.md) dla pełnej umowy kontrybutora.
+
+### Znak Towarowy
+
+Nazwa **ZeroClaw** i logo są zarejestrowanymi znakami towarowymi ZeroClaw Labs. Ta licencja nie przyznaje pozwolenia na ich używanie do sugerowania poparcia lub powiązania. Zobacz [TRADEMARK.md](TRADEMARK.md) dla dozwolonych i zabronionych użyć.
+
+### Ochrony Kontrybutorów
+
+- **Zachowuj prawa autorskie** swoich wkładów
+- **Grant patentowy** (Apache 2.0) chroni cię przed roszczeniami patentowymi innych kontrybutorów
+- Twoje wkłady są **trwale przypisane** w historii commitów i [NOTICE](NOTICE)
+- Żadne prawa znaku towarowego nie są przenoszone przez kontrybucję
+
+## Wkład
+
+Zobacz [CONTRIBUTING.md](CONTRIBUTING.md) i [CLA.md](CLA.md). Zaimplementuj trait, prześlij PR:
+
+- Przewodnik workflow CI: [docs/ci-map.md](docs/ci-map.md)
+- Nowy `Provider` → `src/providers/`
+- Nowy `Channel` → `src/channels/`
+- Nowy `Observer` → `src/observability/`
+- Nowe `Tool` → `src/tools/`
+- Nowa `Memory` → `src/memory/`
+- Nowy `Tunnel` → `src/tunnel/`
+- Nowa `Skill` → `~/.zeroclaw/workspace/skills//`
+
+---
+
+**ZeroClaw** — Zero narzutu. Zero kompromisów. Wdrażaj wszędzie. Zamieniaj cokolwiek. 🦀
+
+## Historia Gwiazdek
+
+
diff --git a/README.pt.md b/README.pt.md
new file mode 100644
index 000000000..d1313cf59
--- /dev/null
+++ b/README.pt.md
@@ -0,0 +1,914 @@
+
+
+
+
+
+ZeroClaw 🦀
+
+
+ Zero sobrecarga. Zero compromisso. 100% Rust. 100% Agnóstico.
+ ⚡️ Roda em hardware de $10 com <5MB de RAM: Isso é 99% menos memória que o OpenClaw e 98% mais barato que um Mac mini!
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Construído por estudantes e membros das comunidades Harvard, MIT e Sundai.Club.
+
+ Infraestrutura de assistente AI rápida, leve e totalmente autônoma
+ Implante em qualquer lugar. Troque qualquer coisa.
+
+
+
+ ZeroClaw é o sistema operacional de runtime para fluxos de trabalho de agentes — uma infraestrutura que abstrai modelos, ferramentas, memória e execução para construir agentes uma vez e executá-los em qualquer lugar.
+
+
+
+Arquitetura baseada em traits · runtime seguro por padrão · provedor/canal/ferramenta intercambiáveis · tudo é conectável
+
+### 📢 Anúncios
+
+Use esta tabela para avisos importantes (mudanças de compatibilidade, avisos de segurança, janelas de manutenção e bloqueios de versão).
+
+| Data (UTC) | Nível | Aviso | Ação |
+| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 2026-02-19 | _Crítico_ | **Não somos afiliados** ao `openagen/zeroclaw` ou `zeroclaw.org`. O domínio `zeroclaw.org` atualmente aponta para o fork `openagen/zeroclaw`, e este domínio/repositório está falsificando nosso site/projeto oficial. | Não confie em informações, binários, arrecadações ou anúncios dessas fontes. Use apenas [este repositório](https://github.com/zeroclaw-labs/zeroclaw) e nossas contas sociais verificadas. |
+| 2026-02-21 | _Importante_ | Nosso site oficial agora está online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Obrigado pela paciência durante a espera. Ainda detectamos tentativas de falsificação: não participe de nenhuma atividade de investimento/financiamento em nome do ZeroClaw se não for publicada através de nossos canais oficiais. | Use [este repositório](https://github.com/zeroclaw-labs/zeroclaw) como a única fonte de verdade. Siga [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), e [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para atualizações oficiais. |
+| 2026-02-19 | _Importante_ | A Anthropic atualizou os termos de uso de autenticação e credenciais em 2026-02-19. A autenticação OAuth (Free, Pro, Max) é exclusivamente para Claude Code e Claude.ai; o uso de tokens OAuth do Claude Free/Pro/Max em qualquer outro produto, ferramenta ou serviço (incluindo Agent SDK) não é permitido e pode violar os Termos de Uso do Consumidor. | Por favor, evite temporariamente as integrações OAuth do Claude Code para prevenir qualquer perda potencial. Cláusula original: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
+
+### ✨ Funcionalidades
+
+- 🏎️ **Runtime Leve por Padrão:** Fluxos de trabalho CLI comuns e comandos de status rodam dentro de um espaço de memória de poucos megabytes em builds de produção.
+- 💰 **Implantação Econômica:** Projetado para placas de baixo custo e pequenas instâncias cloud sem dependências de runtime pesadas.
+- ⚡ **Inícios a Frio Rápidos:** O runtime Rust de binário único mantém o início de comandos e daemons quase instantâneo para operações diárias.
+- 🌍 **Arquitetura Portátil:** Um fluxo de trabalho de binário único em ARM, x86 e RISC-V com provedor/canal/ferramenta intercambiáveis.
+
+### Por que as equipes escolhem o ZeroClaw
+
+- **Leve por padrão:** binário Rust pequeno, início rápido, baixa pegada de memória.
+- **Seguro por design:** emparelhamento, sandboxing estrito, listas de permissão explícitas, escopo de workspace.
+- **Totalmente intercambiável:** os sistemas principais são traits (provedores, canais, ferramentas, memória, túneis).
+- **Sem lock-in de provedor:** suporte de provedor compatível com OpenAI + endpoints personalizados conectáveis.
+
+## Instantâneo de Benchmark (ZeroClaw vs OpenClaw, Reproduzível)
+
+Benchmark rápido em máquina local (macOS arm64, fev. 2026) normalizado para hardware edge de 0.8 GHz.
+
+| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
+| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
+| **Linguagem** | TypeScript | Python | Go | **Rust** |
+| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
+| **Início (núcleo 0.8 GHz)** | > 500s | > 30s | < 1s | **< 10ms** |
+| **Tamanho Binário** | ~28 MB (dist) | N/A (Scripts) | ~8 MB | **3.4 MB** |
+| **Custo** | Mac Mini $599 | Linux SBC ~$50 | Placa Linux $10 | **Qualquer hardware $10** |
+
+> Notas: Os resultados do ZeroClaw são medidos em builds de produção usando `/usr/bin/time -l`. O OpenClaw requer o runtime Node.js (tipicamente ~390 MB de sobrecarga de memória adicional), enquanto o NanoBot requer o runtime Python. PicoClaw e ZeroClaw são binários estáticos. As cifras de RAM acima são memória de runtime; os requisitos de compilação em tempo de build são maiores.
+
+
+
+
+
+### Medição Local Reproduzível
+
+As alegações de benchmark podem derivar à medida que o código e as toolchains evoluem, então sempre meça seu build atual localmente:
+
+```bash
+cargo build --release
+ls -lh target/release/zeroclaw
+
+/usr/bin/time -l target/release/zeroclaw --help
+/usr/bin/time -l target/release/zeroclaw status
+```
+
+Exemplo de amostra (macOS arm64, medido em 18 de fevereiro de 2026):
+
+- Tamanho do binário release: `8.8M`
+- `zeroclaw --help`: tempo real aprox `0.02s`, pegada de memória máxima ~`3.9 MB`
+- `zeroclaw status`: tempo real aprox `0.01s`, pegada de memória máxima ~`4.1 MB`
+
+## Pré-requisitos
+
+
+Windows
+
+### Windows — Obrigatório
+
+1. **Visual Studio Build Tools** (fornece o linker MSVC e o Windows SDK):
+
+ ```powershell
+ winget install Microsoft.VisualStudio.2022.BuildTools
+ ```
+
+ Durante a instalação (ou via Visual Studio Installer), selecione a carga de trabalho **"Desenvolvimento Desktop com C++"**.
+
+2. **Toolchain Rust:**
+
+ ```powershell
+ winget install Rustlang.Rustup
+ ```
+
+ Após a instalação, abra um novo terminal e execute `rustup default stable` para garantir que a toolchain estável esteja ativa.
+
+3. **Verifique** que ambos funcionam:
+ ```powershell
+ rustc --version
+ cargo --version
+ ```
+
+### Windows — Opcional
+
+- **Docker Desktop** — obrigatório apenas se você usar o [runtime Docker sandboxed](#suporte-de-runtime-atual) (`runtime.kind = "docker"`). Instale via `winget install Docker.DockerDesktop`.
+
+
+
+
+Linux / macOS
+
+### Linux / macOS — Obrigatório
+
+1. **Ferramentas de build essenciais:**
+ - **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
+ - **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
+ - **macOS:** Instale as Xcode Command Line Tools: `xcode-select --install`
+
+2. **Toolchain Rust:**
+
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+
+ Veja [rustup.rs](https://rustup.rs) para detalhes.
+
+3. **Verifique:**
+ ```bash
+ rustc --version
+ cargo --version
+ ```
+
+### Linux / macOS — Opcional
+
+- **Docker** — obrigatório apenas se você usar o [runtime Docker sandboxed](#suporte-de-runtime-atual) (`runtime.kind = "docker"`).
+ - **Linux (Debian/Ubuntu):** veja [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
+ - **Linux (Fedora/RHEL):** veja [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
+ - **macOS:** instale o Docker Desktop via [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)
+
+
+
+## Início Rápido
+
+### Opção 1: Configuração automatizada (recomendada)
+
+O script `bootstrap.sh` instala Rust, clona ZeroClaw, compila, e configura seu ambiente de desenvolvimento inicial:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/bootstrap.sh | bash
+```
+
+Isso vai:
+
+1. Instalar Rust (se não presente)
+2. Clonar o repositório ZeroClaw
+3. Compilar ZeroClaw em modo release
+4. Instalar `zeroclaw` em `~/.cargo/bin/`
+5. Criar a estrutura de workspace padrão em `~/.zeroclaw/workspace/`
+6. Gerar um arquivo de configuração inicial `~/.zeroclaw/workspace/config.toml`
+
+Após o bootstrap, recarregue seu shell ou execute `source ~/.cargo/env` para usar o comando `zeroclaw` globalmente.
+
+### Opção 2: Instalação manual
+
+
+Clique para ver os passos de instalação manual
+
+```bash
+# 1. Clone o repositório
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+
+# 2. Compile em release
+cargo build --release --locked
+
+# 3. Instale o binário
+cargo install --path . --locked
+
+# 4. Inicialize o workspace
+zeroclaw init
+
+# 5. Verifique a instalação
+zeroclaw --version
+zeroclaw status
+```
+
+
+
+### Após a instalação
+
+Uma vez instalado (via bootstrap ou manualmente), você deve ver:
+
+```
+~/.zeroclaw/workspace/
+├── config.toml # Configuração principal
+├── .pairing # Segredos de emparelhamento (gerado no primeiro início)
+├── logs/ # Logs de daemon/agent
+├── skills/ # Habilidades personalizadas
+└── memory/ # Armazenamento de contexto conversacional
+```
+
+**Próximos passos:**
+
+1. Configure seus provedores de AI em `~/.zeroclaw/workspace/config.toml`
+2. Confira a [referência de configuração](docs/config-reference.md) para opções avançadas
+3. Inicie o agente: `zeroclaw agent start`
+4. Teste via seu canal preferido (veja [referência de canais](docs/channels-reference.md))
+
+## Configuração
+
+Edite `~/.zeroclaw/workspace/config.toml` para configurar provedores, canais e comportamento do sistema.
+
+### Referência de Configuração Rápida
+
+```toml
+[providers.anthropic]
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+api_key = "sk-..."
+model = "gpt-4o"
+
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@bot:matrix.org"
+password = "..."
+
+[memory]
+kind = "markdown" # ou "sqlite" ou "none"
+
+[runtime]
+kind = "native" # ou "docker" (requer Docker)
+```
+
+**Documentos de referência completos:**
+
+- [Referência de Configuração](docs/config-reference.md) — todas as configurações, validações, valores padrão
+- [Referência de Provedores](docs/providers-reference.md) — configurações específicas de provedores de AI
+- [Referência de Canais](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord e mais
+- [Operações](docs/operations-runbook.md) — monitoramento em produção, rotação de segredos, escalonamento
+
+### Suporte de Runtime (atual)
+
+ZeroClaw suporta dois backends de execução de código:
+
+- **`native`** (padrão) — execução de processo direta, caminho mais rápido, ideal para ambientes confiáveis
+- **`docker`** — isolamento completo de container, políticas de segurança reforçadas, requer Docker
+
+Use `runtime.kind = "docker"` se você precisar de sandboxing estrito ou isolamento de rede. Veja [referência de configuração](docs/config-reference.md#runtime) para detalhes completos.
+
+## Comandos
+
+```bash
+# Gestão de workspace
+zeroclaw init # Inicializa um novo workspace
+zeroclaw status # Mostra status de daemon/agent
+zeroclaw config validate # Verifica sintaxe e valores do config.toml
+
+# Gestão de daemon
+zeroclaw daemon start # Inicia o daemon em segundo plano
+zeroclaw daemon stop # Para o daemon em execução
+zeroclaw daemon restart # Reinicia o daemon (recarga de config)
+zeroclaw daemon logs # Mostra logs do daemon
+
+# Gestão de agent
+zeroclaw agent start # Inicia o agent (requer daemon rodando)
+zeroclaw agent stop # Para o agent
+zeroclaw agent restart # Reinicia o agent (recarga de config)
+
+# Operações de emparelhamento
+zeroclaw pairing init # Gera um novo segredo de emparelhamento
+zeroclaw pairing rotate # Rotaciona o segredo de emparelhamento existente
+
+# Tunneling (para exposição pública)
+zeroclaw tunnel start # Inicia um tunnel para o daemon local
+zeroclaw tunnel stop # Para o tunnel ativo
+
+# Diagnóstico
+zeroclaw doctor # Executa verificações de saúde do sistema
+zeroclaw version # Mostra versão e informações de build
+```
+
+Veja [Referência de Comandos](docs/commands-reference.md) para opções e exemplos completos.
+
+## Arquitetura
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Canais (trait) │
+│ Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Orquestrador Agent │
+│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
+│ │ Roteamento │ │ Contexto │ │ Execução │ │
+│ │ Mensagem │ │ Memória │ │ Ferramenta │ │
+│ └──────────────┘ └──────────────┘ └──────────────┘ │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ┌───────────────┼───────────────┐
+ ▼ ▼ ▼
+┌──────────────┐ ┌──────────────┐ ┌──────────────┐
+│ Provedores │ │ Memória │ │ Ferramentas │
+│ (trait) │ │ (trait) │ │ (trait) │
+├──────────────┤ ├──────────────┤ ├──────────────┤
+│ Anthropic │ │ Markdown │ │ Filesystem │
+│ OpenAI │ │ SQLite │ │ Bash │
+│ Gemini │ │ None │ │ Web Fetch │
+│ Ollama │ │ Custom │ │ Custom │
+│ Custom │ └──────────────┘ └──────────────┘
+└──────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Runtime (trait) │
+│ Native │ Docker │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+**Princípios chave:**
+
+- Tudo é um **trait** — provedores, canais, ferramentas, memória, túneis
+- Canais chamam o orquestrador; o orquestrador chama provedores + ferramentas
+- O sistema de memória gerencia contexto conversacional (markdown, SQLite, ou nenhum)
+- O runtime abstrai a execução de código (nativo ou Docker)
+- Sem lock-in de provedor — troque Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama sem mudanças de código
+
+Veja [documentação de arquitetura](docs/architecture.svg) para diagramas detalhados e detalhes de implementação.
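+
+Para ilustrar a ausência de lock-in de provedor, abaixo um esboço meramente ilustrativo de `config.toml` (chaves e modelos são exemplos, e a seção `[providers.ollama]` é uma suposição baseada na tabela de provedores suportados): mudar o `default_provider` troca o agente de Anthropic para um Ollama local sem nenhuma mudança de código.
+
+```toml
+# Esboço: trocar o provedor padrão sem mudanças de código
+[providers.anthropic]
+enabled = true
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.ollama]
+enabled = true
+model = "llama3.3"            # modelo local, sem chave de API
+
+[orchestrator]
+default_provider = "ollama"   # antes: "anthropic"
+fallback_providers = ["anthropic"]
+```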
+
+## Exemplos
+
+### Bot do Telegram
+
+```toml
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+allowed_users = [987654321] # Seu ID de usuário do Telegram
+```
+
+Inicie o daemon + agent, então envie uma mensagem para seu bot no Telegram:
+
+```
+/start
+Olá! Você poderia me ajudar a escrever um script Python?
+```
+
+O bot responde com código gerado por AI, executa ferramentas se solicitado, e mantém o contexto de conversação.
+
+### Matrix (criptografia ponta a ponta)
+
+```toml
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@zeroclaw:matrix.org"
+password = "..."
+device_name = "zeroclaw-prod"
+e2ee_enabled = true
+```
+
+Convide `@zeroclaw:matrix.org` para uma sala criptografada, e o bot responderá com criptografia completa. Veja [Guia Matrix E2EE](docs/matrix-e2ee-guide.md) para configuração de verificação de dispositivo.
+
+### Multi-Provedor
+
+```toml
+[providers.anthropic]
+enabled = true
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+enabled = true
+api_key = "sk-..."
+model = "gpt-4o"
+
+[orchestrator]
+default_provider = "anthropic"
+fallback_providers = ["openai"] # Failover em erro de provedor
+```
+
+Se Anthropic falhar ou tiver rate-limit, o orquestrador faz failover automaticamente para OpenAI.
+
+### Memória Personalizada
+
+```toml
+[memory]
+kind = "sqlite"
+path = "~/.zeroclaw/workspace/memory/conversations.db"
+retention_days = 90 # Purga automática após 90 dias
+```
+
+Ou use Markdown para armazenamento legível por humanos:
+
+```toml
+[memory]
+kind = "markdown"
+path = "~/.zeroclaw/workspace/memory/"
+```
+
+Veja [Referência de Configuração](docs/config-reference.md#memory) para todas as opções de memória.
+
+## Suporte de Provedor
+
+| Provedor | Status | API Key | Modelos de Exemplo |
+| ----------------- | ----------- | ------------------- | ---------------------------------------------------- |
+| **Anthropic** | ✅ Estável | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
+| **OpenAI** | ✅ Estável | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
+| **Google Gemini** | ✅ Estável | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
+| **Ollama** | ✅ Estável | N/A (local) | `llama3.3`, `qwen2.5`, `phi4` |
+| **Cerebras** | ✅ Estável | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
+| **Groq** | ✅ Estável | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
+| **Mistral** | 🚧 Planejado | `MISTRAL_API_KEY` | TBD |
+| **Cohere** | 🚧 Planejado | `COHERE_API_KEY` | TBD |
+
+### Endpoints Personalizados
+
+ZeroClaw suporta endpoints compatíveis com OpenAI:
+
+```toml
+[providers.custom]
+enabled = true
+api_key = "..."
+base_url = "https://api.your-llm-provider.com/v1"
+model = "your-model-name"
+```
+
+Exemplo: use [LiteLLM](https://github.com/BerriAI/litellm) como proxy para acessar qualquer LLM via interface OpenAI.
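+
+Um esboço ilustrativo dessa configuração (suposições: um proxy LiteLLM local escutando em `http://localhost:4000/v1`, e o nome do modelo vem da configuração do próprio proxy):
+
+```toml
+[providers.custom]
+enabled = true
+api_key = "qualquer-chave-nao-vazia"   # chave de autenticação do proxy (valor de exemplo)
+base_url = "http://localhost:4000/v1"  # endpoint local do LiteLLM (suposição)
+model = "gpt-4o"                       # alias de modelo definido no LiteLLM (exemplo)
+```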
+
+Veja [Referência de Provedores](docs/providers-reference.md) para detalhes de configuração completos.
+
+## Suporte de Canal
+
+| Canal | Status | Autenticação | Notas |
+| ------------ | ----------- | ------------------------ | --------------------------------------------------------- |
+| **Telegram** | ✅ Estável | Bot Token | Suporte completo incluindo arquivos, imagens, botões inline |
+| **Matrix** | ✅ Estável | Senha ou Token | Suporte E2EE com verificação de dispositivo |
+| **Slack** | 🚧 Planejado | OAuth ou Bot Token | Requer acesso ao workspace |
+| **Discord** | 🚧 Planejado | Bot Token | Requer permissões de guild |
+| **WhatsApp** | 🚧 Planejado | Twilio ou API oficial | Requer conta business |
+| **CLI** | ✅ Estável | Nenhum | Interface conversacional direta |
+| **Web** | 🚧 Planejado | API Key ou OAuth | Interface de chat baseada em navegador |
+
+Veja [Referência de Canais](docs/channels-reference.md) para instruções de configuração completas.
+
+## Suporte de Ferramentas
+
+ZeroClaw fornece ferramentas integradas para execução de código, acesso ao sistema de arquivos e recuperação web:
+
+| Ferramenta | Descrição | Runtime Requerido |
+| -------------------- | --------------------------- | ----------------------------- |
+| **bash** | Executa comandos shell | Nativo ou Docker |
+| **python** | Executa scripts Python | Python 3.8+ (nativo) ou Docker |
+| **javascript** | Executa código Node.js | Node.js 18+ (nativo) ou Docker |
+| **filesystem_read** | Lê arquivos | Nativo ou Docker |
+| **filesystem_write** | Escreve arquivos | Nativo ou Docker |
+| **web_fetch** | Obtém conteúdo web | Nativo ou Docker |
+
+### Segurança de Execução
+
+- **Runtime Nativo** — roda como processo de usuário do daemon, acesso completo ao sistema de arquivos
+- **Runtime Docker** — isolamento completo de container, sistemas de arquivos e redes separados
+
+Configure a política de execução em `config.toml`:
+
+```toml
+[runtime]
+kind = "docker"
+allowed_tools = ["bash", "python", "filesystem_read"] # Lista de permissão explícita
+```
+
+Veja [Referência de Configuração](docs/config-reference.md#runtime) para opções de segurança completas.
+
+## Implantação
+
+### Implantação Local (Desenvolvimento)
+
+```bash
+zeroclaw daemon start
+zeroclaw agent start
+```
+
+### Implantação em Servidor (Produção)
+
+Use systemd para gerenciar o daemon e agent como serviços:
+
+```bash
+# Instale o binário
+cargo install --path . --locked
+
+# Configure o workspace
+zeroclaw init
+
+# Crie arquivos de serviço systemd
+sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
+sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
+
+# Habilite e inicie os serviços
+sudo systemctl enable zeroclaw-daemon zeroclaw-agent
+sudo systemctl start zeroclaw-daemon zeroclaw-agent
+
+# Verifique o status
+sudo systemctl status zeroclaw-daemon
+sudo systemctl status zeroclaw-agent
+```
+
+Veja [Guia de Implantação de Rede](docs/network-deployment.md) para instruções completas de implantação em produção.
+
+### Docker
+
+```bash
+# Compile a imagem
+docker build -t zeroclaw:latest .
+
+# Execute o container
+docker run -d \
+ --name zeroclaw \
+ -v ~/.zeroclaw/workspace:/workspace \
+ -e ANTHROPIC_API_KEY=sk-ant-... \
+ zeroclaw:latest
+```
+
+Veja [`Dockerfile`](Dockerfile) para detalhes de build e opções de configuração.
+
+### Hardware Edge
+
+ZeroClaw é projetado para rodar em hardware de baixo consumo:
+
+- **Raspberry Pi Zero 2 W** — ~512 MB RAM, núcleo ARMv8 único, < $5 custo de hardware
+- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-núcleo, ideal para workloads concorrentes
+- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, custo ultra-baixo
+- **SBCs x86 (Intel N100)** — 4-8 GB RAM, builds rápidos, suporte Docker nativo
+
+Veja [Guia de Hardware](docs/hardware/README.md) para instruções de configuração específicas por dispositivo.
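+
+Uma das abordagens possíveis é a compilação cruzada em uma estação de trabalho, copiando depois o binário para o dispositivo. Abaixo, um esboço ilustrativo para ARM 64-bit (suposições: `rustup` instalado; para alguns alvos pode ser necessário um linker cruzado adicional):
+
+```bash
+# Adicione o alvo ARM64 e compile o binário release (ex.: Raspberry Pi 4/5, 64-bit)
+rustup target add aarch64-unknown-linux-gnu
+cargo build --release --locked --target aarch64-unknown-linux-gnu
+
+# Binário resultante para copiar ao dispositivo
+ls -lh target/aarch64-unknown-linux-gnu/release/zeroclaw
+```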
+
+## Tunneling (Exposição Pública)
+
+Exponha seu daemon ZeroClaw local à rede pública via túneis seguros:
+
+```bash
+zeroclaw tunnel start --provider cloudflare
+```
+
+Provedores de tunnel suportados:
+
+- **Cloudflare Tunnel** — HTTPS grátis, sem exposição de portas, suporte multi-domínio
+- **Ngrok** — configuração rápida, domínios personalizados (plano pago)
+- **Tailscale** — rede mesh privada, sem porta pública
+
+Veja [Referência de Configuração](docs/config-reference.md#tunnel) para opções de configuração completas.
+
+## Segurança
+
+ZeroClaw implementa múltiplas camadas de segurança:
+
+### Emparelhamento
+
+O daemon gera um segredo de emparelhamento no primeiro início e o armazena em `~/.zeroclaw/workspace/.pairing`. Clientes (agent, CLI) devem apresentar este segredo para se conectar.
+
+```bash
+zeroclaw pairing rotate # Gera um novo segredo e invalida o anterior
+```
+
+### Sandboxing
+
+- **Runtime Docker** — isolamento completo de container com sistemas de arquivos e redes separados
+- **Runtime Nativo** — roda como processo de usuário, com escopo de workspace por padrão
+
+### Listas de Permissão
+
+Canais podem restringir acesso por ID de usuário:
+
+```toml
+[channels.telegram]
+enabled = true
+allowed_users = [123456789, 987654321] # Lista de permissão explícita
+```
+
+### Criptografia
+
+- **Matrix E2EE** — criptografia ponta a ponta completa com verificação de dispositivo
+- **Transporte TLS** — todo o tráfego de API e tunnel usa HTTPS/TLS
+
+Veja [Documentação de Segurança](docs/security/README.md) para políticas e práticas completas.
+
+## Observabilidade
+
+ZeroClaw registra logs em `~/.zeroclaw/workspace/logs/` por padrão. Os logs são armazenados por componente:
+
+```
+~/.zeroclaw/workspace/logs/
+├── daemon.log # Logs do daemon (início, requisições API, erros)
+├── agent.log # Logs do agent (roteamento de mensagens, execução de ferramentas)
+├── telegram.log # Logs específicos do canal (se habilitado)
+└── matrix.log # Logs específicos do canal (se habilitado)
+```
+
+### Configuração de Logging
+
+```toml
+[logging]
+level = "info" # debug, info, warn, error
+path = "~/.zeroclaw/workspace/logs/"
+rotation = "daily" # daily, hourly, size
+max_size_mb = 100 # Para rotação baseada em tamanho
+retention_days = 30 # Purga automática após N dias
+```
+
+Veja [Referência de Configuração](docs/config-reference.md#logging) para todas as opções de logging.
+
+### Métricas (Planejado)
+
+Suporte a métricas Prometheus para monitoramento em produção em breve. Rastreamento em [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
+
+## Habilidades (Skills)
+
+ZeroClaw suporta habilidades personalizadas — módulos reutilizáveis que estendem as capacidades do sistema.
+
+### Definição de Habilidade
+
+Habilidades são armazenadas em `~/.zeroclaw/workspace/skills//` com esta estrutura:
+
+```
+skills/
+└── my-skill/
+ ├── skill.toml # Metadados da habilidade (nome, descrição, dependências)
+ ├── prompt.md # Prompt de sistema para a AI
+ └── tools/ # Ferramentas personalizadas opcionais
+ └── my_tool.py
+```
+
+### Exemplo de Habilidade
+
+```toml
+# skills/web-research/skill.toml
+[skill]
+name = "web-research"
+description = "Pesquisa na web e resume resultados"
+version = "1.0.0"
+
+[dependencies]
+tools = ["web_fetch", "bash"]
+```
+
+```markdown
+
+
+Você é um assistente de pesquisa. Quando pedirem para pesquisar algo:
+
+1. Use web_fetch para obter o conteúdo
+2. Resuma os resultados em um formato fácil de ler
+3. Cite as fontes com URLs
+```
+
+### Uso de Habilidades
+
+Habilidades são carregadas automaticamente no início do agent. Referencie-as por nome em conversas:
+
+```
+Usuário: Use a habilidade web-research para encontrar as últimas notícias de AI
+Bot: [carrega a habilidade web-research, executa web_fetch, resume resultados]
+```
+
+Veja seção [Habilidades (Skills)](#habilidades-skills) para instruções completas de criação de habilidades.
+
+## Open Skills
+
+ZeroClaw suporta [Open Skills](https://github.com/openagents-com/open-skills) — um sistema modular e agnóstico de provedores para estender capacidades de agentes AI.
+
+### Habilitar Open Skills
+
+```toml
+[skills]
+open_skills_enabled = true
+# open_skills_dir = "/path/to/open-skills" # opcional
+```
+
+Você também pode sobrescrever em runtime com `ZEROCLAW_OPEN_SKILLS_ENABLED` e `ZEROCLAW_OPEN_SKILLS_DIR`.
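+
+Um esboço mínimo desse override na sessão de shell atual (os valores e o caminho do diretório são apenas exemplos):
+
+```bash
+# Sobrescreva as configurações de Open Skills via variáveis de ambiente
+export ZEROCLAW_OPEN_SKILLS_ENABLED=true
+export ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills"   # caminho de exemplo
+
+# Inicie o agent na mesma sessão para que ele herde as variáveis
+zeroclaw agent start
+```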
+
+## Desenvolvimento
+
+```bash
+cargo build # Build de desenvolvimento
+cargo build --release # Build release (codegen-units=1, funciona em todos os dispositivos incluindo Raspberry Pi)
+cargo build --profile release-fast # Build mais rápido (codegen-units=8, requer 16 GB+ RAM)
+cargo test # Executa o suite de testes completo
+cargo clippy --locked --all-targets -- -D clippy::correctness
+cargo fmt # Formato
+
+# Executa o benchmark de comparação SQLite vs Markdown
+cargo test --test memory_comparison -- --nocapture
+```
+
+### Hook pre-push
+
+Um hook de git executa `cargo fmt --check`, `cargo clippy -- -D warnings`, e `cargo test` antes de cada push. Ative-o uma vez:
+
+```bash
+git config core.hooksPath .githooks
+```
+
+### Solução de Problemas de Build (erros OpenSSL no Linux)
+
+Se você encontrar um erro de build `openssl-sys`, sincronize dependências e recompile com o lockfile do repositório:
+
+```bash
+git pull
+cargo build --release --locked
+cargo install --path . --force --locked
+```
+
+ZeroClaw está configurado para usar `rustls` para dependências HTTP/TLS; `--locked` mantém o grafo transitivo determinístico em ambientes limpos.
+
+Para pular o hook quando precisar de um push rápido durante desenvolvimento:
+
+```bash
+git push --no-verify
+```
+
+## Colaboração e Docs
+
+Comece com o hub de documentação para um mapa baseado em tarefas:
+
+- Hub de Documentação: [`docs/README.md`](docs/README.md)
+- Índice Unificado de Docs: [`docs/SUMMARY.md`](docs/SUMMARY.md)
+- Referência de Comandos: [`docs/commands-reference.md`](docs/commands-reference.md)
+- Referência de Configuração: [`docs/config-reference.md`](docs/config-reference.md)
+- Referência de Provedores: [`docs/providers-reference.md`](docs/providers-reference.md)
+- Referência de Canais: [`docs/channels-reference.md`](docs/channels-reference.md)
+- Runbook de Operações: [`docs/operations-runbook.md`](docs/operations-runbook.md)
+- Solução de Problemas: [`docs/troubleshooting.md`](docs/troubleshooting.md)
+- Inventário/Classificação de Docs: [`docs/docs-inventory.md`](docs/docs-inventory.md)
+- Snapshot de Triage de PR/Issue (em 18 de fev. de 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
+
+Referências principais de colaboração:
+
+- Hub de Documentação: [docs/README.md](docs/README.md)
+- Modelo de Documentação: [docs/doc-template.md](docs/doc-template.md)
+- Checklist de Mudança de Documentação: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
+- Referência de Configuração de Canais: [docs/channels-reference.md](docs/channels-reference.md)
+- Operações de Salas Criptografadas Matrix: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
+- Guia de Contribuição: [CONTRIBUTING.md](CONTRIBUTING.md)
+- Política de Fluxo de Trabalho PR: [docs/pr-workflow.md](docs/pr-workflow.md)
+- Playbook do Revisor (triage + revisão profunda): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
+- Mapa de Propriedade e Triage CI: [docs/ci-map.md](docs/ci-map.md)
+- Política de Divulgação de Segurança: [SECURITY.md](SECURITY.md)
+
+Para implantação e operações de runtime:
+
+- Guia de Implantação de Rede: [docs/network-deployment.md](docs/network-deployment.md)
+- Playbook de Agent Proxy: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
+
+## Apoiar o ZeroClaw
+
+Se ZeroClaw ajuda seu trabalho e você deseja apoiar o desenvolvimento contínuo, você pode doar aqui:
+
+
+
+### 🙏 Agradecimentos Especiais
+
+Um sincero agradecimento às comunidades e instituições que inspiram e alimentam este trabalho de código aberto:
+
+- **Harvard University** — por fomentar a curiosidade intelectual e empurrar os limites do possível.
+- **MIT** — por defender o conhecimento aberto, o código aberto, e a convicção de que a tecnologia deveria ser acessível a todos.
+- **Sundai Club** — pela comunidade, energia, e vontade incessante de construir coisas que importam.
+- **O Mundo e Além** 🌍✨ — a cada contribuidor, sonhador, e construtor lá fora que faz do código aberto uma força para o bem. Isso é por você.
+
+Construímos em código aberto porque as melhores ideias vêm de todo lugar. Se você está lendo isso, você é parte disso. Bem-vindo. 🦀❤️
+
+## ⚠️ Repositório Oficial e Aviso de Falsificação
+
+**Este é o único repositório oficial do ZeroClaw:**
+
+> https://github.com/zeroclaw-labs/zeroclaw
+
+Qualquer outro repositório, organização, domínio ou pacote que afirme ser "ZeroClaw" ou que implique afiliação com ZeroClaw Labs é **não autorizado e não é afiliado a este projeto**. Forks não autorizados conhecidos serão listados em [TRADEMARK.md](TRADEMARK.md).
+
+Se você encontrar falsificação ou uso indevido de marca, por favor [abra uma issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
+
+---
+
+## Licença
+
+ZeroClaw tem licença dupla para máxima abertura e proteção de contribuidores:
+
+| Licença | Casos de Uso |
+| ---------------------------- | ------------------------------------------------------------ |
+| [MIT](LICENSE-MIT) | Código aberto, pesquisa, acadêmico, uso pessoal |
+| [Apache 2.0](LICENSE-APACHE) | Proteção de patentes, institucional, implantação comercial |
+
+Você pode escolher qualquer uma das licenças. **Os contribuidores concedem automaticamente direitos sob ambas** — veja [CLA.md](CLA.md) para o acordo de contribuidor completo.
+
+### Marca
+
+O nome **ZeroClaw** e o logo são marcas registradas da ZeroClaw Labs. Esta licença não concede permissão para usá-los para implicar aprovação ou afiliação. Veja [TRADEMARK.md](TRADEMARK.md) para usos permitidos e proibidos.
+
+### Proteções do Contribuidor
+
+- **Você mantém os direitos autorais** de suas contribuições
+- **Concessão de patentes** (Apache 2.0) protege você contra reivindicações de patentes por outros contribuidores
+- Suas contribuições são **atribuídas permanentemente** no histórico de commits e [NOTICE](NOTICE)
+- Nenhum direito de marca é transferido ao contribuir
+
+## Contribuir
+
+Veja [CONTRIBUTING.md](CONTRIBUTING.md) e [CLA.md](CLA.md). Implemente um trait, envie uma PR:
+
+- Guia de fluxo de trabalho CI: [docs/ci-map.md](docs/ci-map.md)
+- Novo `Provider` → `src/providers/`
+- Novo `Channel` → `src/channels/`
+- Novo `Observer` → `src/observability/`
+- Novo `Tool` → `src/tools/`
+- Nova `Memory` → `src/memory/`
+- Novo `Tunnel` → `src/tunnel/`
+- Nova `Skill` → `~/.zeroclaw/workspace/skills/<skill-name>/`
+
+---
+
+**ZeroClaw** — Zero sobrecarga. Zero compromisso. Implante em qualquer lugar. Troque qualquer coisa. 🦀
+
+## Histórico de Estrelas
+
+
diff --git a/README.ro.md b/README.ro.md
new file mode 100644
index 000000000..7130e77c8
--- /dev/null
+++ b/README.ro.md
@@ -0,0 +1,179 @@
+
+
+
+
+
+ZeroClaw 🦀
+
+
+ Zero overhead. Zero compromisuri. 100% Rust. 100% Agnostic.
+ ⚡️ Rulează pe hardware de $10 cu <5MB RAM: Asta e cu 99% mai puțină memorie decât OpenClaw și cu 98% mai ieftin decât un Mac mini!
+
+
+---
+
+## Ce este ZeroClaw?
+
+ZeroClaw este o infrastructură de asistent AI ușoară, modulară și extensibilă construită în Rust. Conectează diverși furnizori de LLM (Anthropic, OpenAI, Google, Ollama, etc.) printr-o interfață unificată și suportă multiple canale (Telegram, Matrix, CLI, etc.).
+
+### Caracteristici Principale
+
+- **🦀 Scris în Rust**: Performanță ridicată, siguranță a memoriei și abstracțiuni fără costuri
+- **🔌 Agnostic față de furnizori**: Suportă OpenAI, Anthropic, Google Gemini, Ollama și alții
+- **📱 Multi-canal**: Telegram, Matrix (cu E2EE), CLI și altele
+- **🧠 Memorie modulară**: Backend-uri SQLite și Markdown
+- **🛠️ Instrumente extensibile**: Adaugă instrumente personalizate cu ușurință
+- **🔒 Securitate pe primul loc**: Reverse proxy, design axat pe confidențialitate
+
+---
+
+## Start Rapid
+
+### Cerințe
+
+- Rust 1.70+
+- O cheie API de furnizor LLM (Anthropic, OpenAI, etc.)
+
+### Instalare
+
+```bash
+# Clonează repository-ul
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+
+# Construiește
+cargo build --release
+
+# Rulează
+cargo run --release
+```
+
+### Cu Docker
+
+```bash
+docker run -d \
+ --name zeroclaw \
+ -e ANTHROPIC_API_KEY=your_key \
+ -v zeroclaw-data:/app/data \
+ zeroclaw/zeroclaw:latest
+```
+
+---
+
+## Configurare
+
+ZeroClaw folosește un fișier de configurare TOML. În mod implicit, caută `~/.zeroclaw/workspace/config.toml`.
+
+```toml
+# Configurare furnizori
+[providers.anthropic]
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+api_key = "sk-..."
+model = "gpt-4o"
+
+# Configurare memorie
+[memory]
+kind = "sqlite"
+path = "~/.zeroclaw/workspace/memory/conversations.db"
+
+# Configurare canale
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+```
+
+---
+
+## Documentație
+
+Pentru documentație detaliată, vezi:
+
+- [Hub Documentație](docs/README.md)
+- [Referință Comenzi](docs/commands-reference.md)
+- [Referință Furnizori](docs/providers-reference.md)
+- [Referință Canale](docs/channels-reference.md)
+- [Referință Configurare](docs/config-reference.md)
+
+---
+
+## Contribuții
+
+Contribuțiile sunt binevenite! Te rugăm să citești [Ghidul de Contribuții](CONTRIBUTING.md).
+
+---
+
+## Licență
+
+Acest proiect este licențiat dual:
+
+- MIT License
+- Apache License, versiunea 2.0
+
+Vezi [LICENSE-APACHE](LICENSE-APACHE) și [LICENSE-MIT](LICENSE-MIT) pentru detalii.
+
+---
+
+## Comunitate
+
+- [Telegram](https://t.me/zeroclawlabs)
+- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
+
+---
+
+## Sponsori
+
+Dacă ZeroClaw îți este util, te rugăm să iei în considerare să ne cumperi o cafea:
+
+[Buy Me a Coffee](https://buymeacoffee.com/argenistherose)
diff --git a/README.sv.md b/README.sv.md
new file mode 100644
index 000000000..3ca4d45e5
--- /dev/null
+++ b/README.sv.md
@@ -0,0 +1,179 @@
+
+
+
+
+
+ZeroClaw 🦀
+
+
+ Noll overhead. Noll kompromiss. 100% Rust. 100% Agnostisk.
+ ⚡️ Kör på $10 hårdvara med <5MB RAM: Det är 99% mindre minne än OpenClaw och 98% billigare än en Mac mini!
+
+ Zero overhead. Zero compromise. 100% Rust. 100% Agnostic.
+ ⚡️ Tumatakbo sa $10 hardware na may <5MB RAM: Ito ay 99% mas kaunting memorya kaysa sa OpenClaw at 98% mas mura kaysa sa isang Mac mini!
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Binuo ng mga mag-aaral at miyembro ng Harvard, MIT, at Sundai.Club na komunidad.
+
+ Mabilis, magaan, at ganap na autonomous na AI assistant infrastructure
+ I-deploy kahit saan. I-swap ang anumang bagay.
+
+
+
+ Ang ZeroClaw ay ang runtime operating system para sa agent workflows — isang infrastructure na nag-a-abstract ng mga modelo, tools, memory, at execution upang bumuo ng mga agent nang isang beses at patakbuhin ang mga ito kahit saan.
+
+
+
+Trait-driven architecture · secure-by-default runtime · swappable provider/channel/tool · lahat ay pluggable
+
+### 📢 Mga Anunsyo
+
+Gamitin ang talahanayang ito para sa mahahalagang paunawa (compatibility changes, security notices, maintenance windows, at version blocks).
+
+| Petsa (UTC) | Antas | Paunawa | Aksyon |
+| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 2026-02-19 | _Kritikal_ | **Hindi kami kaugnay** sa `openagen/zeroclaw` o `zeroclaw.org`. Ang domain na `zeroclaw.org` ay kasalukuyang tumuturo sa fork na `openagen/zeroclaw`, at ang domain/repository na ito ay nanggagaya sa aming opisyal na website/proyekto. | Huwag magtiwala sa impormasyon, binaries, fundraising, o mga anunsyo mula sa mga pinagmulang ito. Gamitin lamang [ang repository na ito](https://github.com/zeroclaw-labs/zeroclaw) at aming mga verified social media accounts. |
+| 2026-02-21 | _Mahalaga_ | Ang aming opisyal na website ay ngayon online: [zeroclawlabs.ai](https://zeroclawlabs.ai). Salamat sa iyong pasensya sa panahon ng paghihintay. Nakikita pa rin namin ang mga pagtatangka ng panliliko: huwag lumahok sa anumang investment/funding activity sa ngalan ng ZeroClaw kung hindi ito nai-publish sa pamamagitan ng aming mga opisyal na channel. | Gamitin [ang repository na ito](https://github.com/zeroclaw-labs/zeroclaw) bilang nag-iisang source of truth. Sundan [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grupo)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/), at [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search) para sa mga opisyal na update. |
+| 2026-02-19 | _Mahalaga_ | In-update ng Anthropic ang authentication at credential use terms noong 2026-02-19. Ang OAuth authentication (Free, Pro, Max) ay eksklusibo para sa Claude Code at Claude.ai; ang paggamit ng Claude Free/Pro/Max OAuth tokens sa anumang iba pang produkto, tool, o serbisyo (kasama ang Agent SDK) ay hindi pinapayagan at maaaring lumabag sa Consumer Terms of Use. | Mangyaring pansamantalang iwasan ang Claude Code OAuth integrations upang maiwasan ang anumang potensyal na pagkawala. Orihinal na clause: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
+
+### ✨ Mga Tampok
+
+- 🏎️ **Lightweight Runtime by Default:** Ang mga karaniwang CLI workflows at status commands ay tumatakbo sa loob ng ilang megabytes ng memory footprint sa production builds.
+- 💰 **Cost-Effective Deployment:** Dinisenyo para sa low-cost boards at maliliit na cloud instances nang walang mga heavy runtime dependencies.
+- ⚡ **Fast Cold Starts:** Ang single-binary Rust runtime ay nagpapanatili ng command at daemon startup na halos instant para sa pang-araw-araw na operasyon.
+- 🌍 **Portable Architecture:** Isang single-binary workflow sa ARM, x86, at RISC-V na may swappable na provider/channel/tool.
+
+### Bakit pinipili ng mga team ang ZeroClaw
+
+- **Lightweight by default:** maliit na Rust binary, mabilis na startup, mababang memory footprint.
+- **Secure by design:** pairing, strict sandboxing, explicit allowlists, workspace scope.
+- **Fully swappable:** ang core systems ay traits (providers, channels, tools, memory, tunnels).
+- **No vendor lock-in:** OpenAI-compatible provider support + pluggable custom endpoints.
+
+## Benchmark Snapshot (ZeroClaw vs OpenClaw, Reproducible)
+
+Mabilis na benchmark sa lokal na machine (macOS arm64, Peb. 2026) na normalized para sa 0.8 GHz edge hardware.
+
+| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
+| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
+| **Wika** | TypeScript | Python | Go | **Rust** |
+| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
+| **Startup (0.8 GHz core)** | > 500s | > 30s | < 1s | **< 10ms** |
+| **Binary Size** | ~28 MB (dist) | N/A (Scripts) | ~8 MB | **3.4 MB** |
+| **Gastos** | Mac Mini $599 | Linux SBC ~$50 | Linux board $10 | **Kahit anong hardware $10** |
+
+> Mga Tala: Ang mga resulta ng ZeroClaw ay sinusukat sa production builds gamit ang `/usr/bin/time -l`. Ang OpenClaw ay nangangailangan ng Node.js runtime (typically ~390 MB additional memory overhead), habang ang NanoBot ay nangangailangan ng Python runtime. Ang PicoClaw at ZeroClaw ay static binaries. Ang mga RAM figure sa itaas ay runtime memory; ang build-time compilation requirements ay mas mataas.
+
+
+
+
+
+### Reproducible Local Measurement
+
+Ang mga benchmark claim ay maaaring mag-drift habang ang code at toolchains ay nag-e-evolve, kaya palaging sukatin ang iyong current build locally:
+
+```bash
+cargo build --release
+ls -lh target/release/zeroclaw
+
+/usr/bin/time -l target/release/zeroclaw --help
+/usr/bin/time -l target/release/zeroclaw status
+```
+
+Halimbawa ng sukat (macOS arm64, nasukat noong Pebrero 18, 2026):
+
+- Release binary size: `8.8M`
+- `zeroclaw --help`: real time na humigit-kumulang `0.02s`, peak memory footprint ~`3.9 MB`
+- `zeroclaw status`: real time na humigit-kumulang `0.01s`, peak memory footprint ~`4.1 MB`
+
+## Mga Kinakailangan
+
+
+Windows
+
+### Windows — Kinakailangan
+
+1. **Visual Studio Build Tools** (nagbibigay ng MSVC linker at Windows SDK):
+
+ ```powershell
+ winget install Microsoft.VisualStudio.2022.BuildTools
+ ```
+
+ Sa panahon ng installation (o sa pamamagitan ng Visual Studio Installer), piliin ang **"Desktop development with C++"** workload.
+
+2. **Rust Toolchain:**
+
+ ```powershell
+ winget install Rustlang.Rustup
+ ```
+
+ Pagkatapos ng installation, magbukas ng bagong terminal at patakbuhin ang `rustup default stable` upang matiyak na ang stable toolchain ay aktibo.
+
+3. **I-verify** na ang pareho ay gumagana:
+ ```powershell
+ rustc --version
+ cargo --version
+ ```
+
+### Windows — Opsyonal
+
+- **Docker Desktop** — kinakailangan lamang kung gagamit ka ng [Docker sandboxed runtime](#current-runtime-support) (`runtime.kind = "docker"`). I-install sa pamamagitan ng `winget install Docker.DockerDesktop`.
+
+
+
+
+Linux / macOS
+
+### Linux / macOS — Kinakailangan
+
+1. **Essential build tools:**
+ - **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
+ - **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
+ - **macOS:** I-install ang Xcode Command Line Tools: `xcode-select --install`
+
+2. **Rust Toolchain:**
+
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+
+ Tingnan ang [rustup.rs](https://rustup.rs) para sa mga detalye.
+
+3. **I-verify:**
+ ```bash
+ rustc --version
+ cargo --version
+ ```
+
+### Linux / macOS — Opsyonal
+
+- **Docker** — kinakailangan lamang kung gagamit ka ng [Docker sandboxed runtime](#current-runtime-support) (`runtime.kind = "docker"`).
+ - **Linux (Debian/Ubuntu):** tingnan ang [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/)
+ - **Linux (Fedora/RHEL):** tingnan ang [docs.docker.com](https://docs.docker.com/engine/install/fedora/)
+ - **macOS:** i-install ang Docker Desktop sa pamamagitan ng [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/)
+
+
+
+## Mabilis na Pagsisimula
+
+### Option 1: Automated setup (inirerekomenda)
+
+Ang `bootstrap.sh` script ay nag-i-install ng Rust, nagi-clone ng ZeroClaw, nagi-compile, at nagse-set up ng iyong paunang development environment:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/bootstrap.sh | bash
+```
+
+Ito ay:
+
+1. Mag-i-install ng Rust (kung wala)
+2. Magi-clone ng ZeroClaw repository
+3. Magi-compile ng ZeroClaw sa release mode
+4. Mag-i-install ng `zeroclaw` sa `~/.cargo/bin/`
+5. Gagawa ng default workspace structure sa `~/.zeroclaw/workspace/`
+6. Gagawa ng paunang configuration file na `~/.zeroclaw/workspace/config.toml`
+
+Pagkatapos ng bootstrap, i-reload ang iyong shell o patakbuhin ang `source ~/.cargo/env` para gamitin ang `zeroclaw` command globally.
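+
+Mabilis na check pagkatapos ng bootstrap (ang parehong mga command ay nasa manual installation steps sa ibaba):
+
+```bash
+source ~/.cargo/env
+zeroclaw --version   # dapat i-print ang naka-install na bersyon
+zeroclaw status      # dapat ipakita ang workspace status
+```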
+
+### Option 2: Manual installation
+
+
+I-click para makita ang mga manual installation steps
+
+```bash
+# 1. I-clone ang repository
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+
+# 2. I-compile sa release
+cargo build --release --locked
+
+# 3. I-install ang binary
+cargo install --path . --locked
+
+# 4. I-initialize ang workspace
+zeroclaw init
+
+# 5. I-verify ang installation
+zeroclaw --version
+zeroclaw status
+```
+
+
+
+### Pagkatapos ng Installation
+
+Kapag na-install (sa pamamagitan ng bootstrap o manual), dapat mong makita:
+
+```
+~/.zeroclaw/workspace/
+├── config.toml # Main configuration
+├── .pairing # Pairing secrets (generated on first launch)
+├── logs/ # Daemon/agent logs
+├── skills/ # Custom skills
+└── memory/ # Conversation context storage
+```
+
+**Mga susunod na hakbang:**
+
+1. I-configure ang iyong AI providers sa `~/.zeroclaw/workspace/config.toml`
+2. Tingnan ang [configuration reference](docs/config-reference.md) para sa advanced options
+3. Simulan ang agent: `zeroclaw agent start`
+4. I-test sa pamamagitan ng iyong preferred channel (tingnan ang [channels reference](docs/channels-reference.md))
+
+## Configuration
+
+I-edit ang `~/.zeroclaw/workspace/config.toml` para i-configure ang providers, channels, at system behavior.
+
+### Quick Configuration Reference
+
+```toml
+[providers.anthropic]
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+api_key = "sk-..."
+model = "gpt-4o"
+
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@bot:matrix.org"
+password = "..."
+
+[memory]
+kind = "markdown" # o "sqlite" o "none"
+
+[runtime]
+kind = "native" # o "docker" (nangangailangan ng Docker)
+```
+
+**Mga kumpletong reference document:**
+
+- [Configuration Reference](docs/config-reference.md) — lahat ng settings, validations, defaults
+- [Providers Reference](docs/providers-reference.md) — AI provider-specific configurations
+- [Channels Reference](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord, at higit pa
+- [Operations](docs/operations-runbook.md) — production monitoring, secret rotation, scaling
+
+### Current Runtime Support
+
+Sinusuportahan ng ZeroClaw ang dalawang code execution backends:
+
+- **`native`** (default) — direct process execution, pinakamabilis na path, ideal para sa trusted environments
+- **`docker`** — full container isolation, hardened security policies, nangangailangan ng Docker
+
+Gamitin ang `runtime.kind = "docker"` kung kailangan mo ng strict sandboxing o network isolation. Tingnan ang [configuration reference](docs/config-reference.md#runtime) para sa buong detalye.
+
+## Mga Command
+
+```bash
+# Workspace management
+zeroclaw init # Nag-initialize ng bagong workspace
+zeroclaw status # Nagpapakita ng daemon/agent status
+zeroclaw config validate # Nag-verify ng config.toml syntax at values
+
+# Daemon management
+zeroclaw daemon start # Nagse-start ng daemon sa background
+zeroclaw daemon stop # Naghihinto sa running daemon
+zeroclaw daemon restart # Nagre-restart ng daemon (config reload)
+zeroclaw daemon logs # Nagpapakita ng daemon logs
+
+# Agent management
+zeroclaw agent start # Nagse-start ng agent (nangangailangan ng running daemon)
+zeroclaw agent stop # Naghihinto sa agent
+zeroclaw agent restart # Nagre-restart ng agent (config reload)
+
+# Pairing operations
+zeroclaw pairing init # Nag-generate ng bagong pairing secret
+zeroclaw pairing rotate # Nag-rotate ng existing pairing secret
+
+# Tunneling (para sa public exposure)
+zeroclaw tunnel start # Nagse-start ng tunnel sa local daemon
+zeroclaw tunnel stop # Naghihinto sa active tunnel
+
+# Diagnostics
+zeroclaw doctor # Nagpapatakbo ng system health checks
+zeroclaw version # Nagpapakita ng version at build info
+```
+
+Tingnan ang [Commands Reference](docs/commands-reference.md) para sa buong options at examples.
+
+## Architecture
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Channels (trait) │
+│ Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Custom │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Agent Orchestrator │
+│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
+│ │ Message │ │ Context │ │ Tool │ │
+│ │ Routing │ │ Memory │ │ Execution │ │
+│ └──────────────┘ └──────────────┘ └──────────────┘ │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ┌───────────────┼───────────────┐
+ ▼ ▼ ▼
+┌──────────────┐ ┌──────────────┐ ┌──────────────┐
+│ Providers │ │ Memory │ │ Tools │
+│ (trait) │ │ (trait) │ │ (trait) │
+├──────────────┤ ├──────────────┤ ├──────────────┤
+│ Anthropic │ │ Markdown │ │ Filesystem │
+│ OpenAI │ │ SQLite │ │ Bash │
+│ Gemini │ │ None │ │ Web Fetch │
+│ Ollama │ │ Custom │ │ Custom │
+│ Custom │ └──────────────┘ └──────────────┘
+└──────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Runtime (trait) │
+│ Native │ Docker │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+**Mga pangunahing prinsipyo:**
+
+- Ang lahat ay isang **trait** — providers, channels, tools, memory, tunnels
+- Ang mga channel ay tumatawag sa orchestrator; ang orchestrator ay tumatawag sa providers + tools
+- Ang memory system ay nagmamaneho ng conversation context (markdown, SQLite, o none)
+- Ang runtime ay nag-a-abstract ng code execution (native o Docker)
+- Walang provider lock-in — i-swap ang Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama nang walang code changes
+
+Tingnan ang [architecture documentation](docs/architecture.svg) para sa mga detalyadong diagram at implementation details.
+
+## Mga Halimbawa
+
+### Telegram Bot
+
+```toml
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+allowed_users = [987654321] # Ang iyong Telegram user ID
+```
+
+Simulan ang daemon + agent, pagkatapos ay magpadala ng mensahe sa iyong bot sa Telegram:
+
+```
+/start
+Hello! Could you help me write a Python script?
+```
+
+Ang bot ay tumutugon gamit ang AI-generated code, nagpapatupad ng mga tool kung hiniling, at nagpapanatili ng conversation context.
+
+### Matrix (end-to-end encryption)
+
+```toml
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@zeroclaw:matrix.org"
+password = "..."
+device_name = "zeroclaw-prod"
+e2ee_enabled = true
+```
+
+Imbitahan ang `@zeroclaw:matrix.org` sa isang encrypted room, at ang bot ay tutugon gamit ang full encryption. Tingnan ang [Matrix E2EE Guide](docs/matrix-e2ee-guide.md) para sa device verification setup.
+
+### Multi-Provider
+
+```toml
+[providers.anthropic]
+enabled = true
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+enabled = true
+api_key = "sk-..."
+model = "gpt-4o"
+
+[orchestrator]
+default_provider = "anthropic"
+fallback_providers = ["openai"] # Failover on provider error
+```
+
+Kung ang Anthropic ay mabigo o ma-rate-limit, ang orchestrator ay awtomatikong mag-failover sa OpenAI.
+
+### Custom Memory
+
+```toml
+[memory]
+kind = "sqlite"
+path = "~/.zeroclaw/workspace/memory/conversations.db"
+retention_days = 90 # Automatic purge after 90 days
+```
+
+O gamitin ang Markdown para sa human-readable storage:
+
+```toml
+[memory]
+kind = "markdown"
+path = "~/.zeroclaw/workspace/memory/"
+```
+
+Tingnan ang [Configuration Reference](docs/config-reference.md#memory) para sa lahat ng memory options.
+
+## Provider Support
+
+| Provider | Status | API Key | Example Models |
+| ----------------- | ----------- | ------------------- | ---------------------------------------------------- |
+| **Anthropic** | ✅ Stable | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
+| **OpenAI** | ✅ Stable | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
+| **Google Gemini** | ✅ Stable | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
+| **Ollama** | ✅ Stable | N/A (local) | `llama3.3`, `qwen2.5`, `phi4` |
+| **Cerebras** | ✅ Stable | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
+| **Groq** | ✅ Stable | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
+| **Mistral** | 🚧 Planned | `MISTRAL_API_KEY` | TBD |
+| **Cohere** | 🚧 Planned | `COHERE_API_KEY` | TBD |
+
+### Custom Endpoints
+
+Sinusuportahan ng ZeroClaw ang OpenAI-compatible endpoints:
+
+```toml
+[providers.custom]
+enabled = true
+api_key = "..."
+base_url = "https://api.your-llm-provider.com/v1"
+model = "your-model-name"
+```
+
+Halimbawa: gamitin ang [LiteLLM](https://github.com/BerriAI/litellm) bilang proxy para ma-access ang anumang LLM sa pamamagitan ng OpenAI interface.
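+
+Isang minimal na sketch ng pag-point sa isang lokal na LiteLLM proxy (ang port na 4000, ang key, at ang model name ay mga halimbawang assumption lamang):
+
+```toml
+[providers.custom]
+enabled = true
+api_key = "litellm-master-key"        # halimbawang key lamang
+base_url = "http://localhost:4000/v1" # karaniwang local port ng LiteLLM proxy (assumption)
+model = "gpt-4o"                      # anumang model na naka-configure sa LiteLLM
+```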
+
+Tingnan ang [Providers Reference](docs/providers-reference.md) para sa kumpletong configuration details.
+
+## Channel Support
+
+| Channel | Status | Authentication | Notes |
+| ------------ | ----------- | ------------------------ | --------------------------------------------------------- |
+| **Telegram** | ✅ Stable | Bot Token | Full support including files, images, inline buttons |
+| **Matrix** | ✅ Stable | Password or Token | E2EE support with device verification |
+| **Slack** | 🚧 Planned | OAuth or Bot Token | Requires workspace access |
+| **Discord** | 🚧 Planned | Bot Token | Requires guild permissions |
+| **WhatsApp** | 🚧 Planned | Twilio or official API | Requires business account |
+| **CLI** | ✅ Stable | None | Direct conversational interface |
+| **Web** | 🚧 Planned | API Key or OAuth | Browser-based chat interface |
+
+Tingnan ang [Channels Reference](docs/channels-reference.md) para sa kumpletong configuration instructions.
+
+## Tool Support
+
+Nagbibigay ang ZeroClaw ng built-in tools para sa code execution, filesystem access, at web retrieval:
+
+| Tool | Description | Required Runtime |
+| -------------------- | --------------------------- | ----------------------------- |
+| **bash** | Executes shell commands | Native or Docker |
+| **python** | Executes Python scripts | Python 3.8+ (native) or Docker |
+| **javascript** | Executes Node.js code | Node.js 18+ (native) or Docker |
+| **filesystem_read** | Reads files | Native or Docker |
+| **filesystem_write** | Writes files | Native or Docker |
+| **web_fetch** | Fetches web content | Native or Docker |
+
+### Execution Security
+
+- **Native Runtime** — runs as daemon's user process, full filesystem access
+- **Docker Runtime** — full container isolation, separate filesystems and networks
+
+I-configure ang execution policy sa `config.toml`:
+
+```toml
+[runtime]
+kind = "docker"
+allowed_tools = ["bash", "python", "filesystem_read"] # Explicit allowlist
+```
+
+Tingnan ang [Configuration Reference](docs/config-reference.md#runtime) para sa kumpletong security options.
+
+## Deployment
+
+### Local Deployment (Development)
+
+```bash
+zeroclaw daemon start
+zeroclaw agent start
+```
+
+### Server Deployment (Production)
+
+Gamitin ang systemd para mamaneho ang daemon at agent bilang services:
+
+```bash
+# I-install ang binary
+cargo install --path . --locked
+
+# I-configure ang workspace
+zeroclaw init
+
+# Gumawa ng systemd service files
+sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
+sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
+
+# I-enable at i-start ang services
+sudo systemctl enable zeroclaw-daemon zeroclaw-agent
+sudo systemctl start zeroclaw-daemon zeroclaw-agent
+
+# I-verify ang status
+sudo systemctl status zeroclaw-daemon
+sudo systemctl status zeroclaw-agent
+```
+
+Tingnan ang [Network Deployment Guide](docs/network-deployment.md) para sa kumpletong production deployment instructions.
+
+### Docker
+
+```bash
+# I-build ang image
+docker build -t zeroclaw:latest .
+
+# I-run ang container
+docker run -d \
+ --name zeroclaw \
+ -v ~/.zeroclaw/workspace:/workspace \
+ -e ANTHROPIC_API_KEY=sk-ant-... \
+ zeroclaw:latest
+```
+
+Tingnan ang [`Dockerfile`](Dockerfile) para sa build details at configuration options.
+
+### Edge Hardware
+
+Ang ZeroClaw ay dinisenyo para tumakbo sa low-power hardware:
+
+- **Raspberry Pi Zero 2 W** — ~512 MB RAM, single ARMv8 core, < $5 hardware cost
+- **Raspberry Pi 4/5** — 1 GB+ RAM, multi-core, ideal for concurrent workloads
+- **Orange Pi Zero 2** — ~512 MB RAM, quad-core ARMv8, ultra-low cost
+- **x86 SBCs (Intel N100)** — 4-8 GB RAM, fast builds, native Docker support
+
+Tingnan ang [Hardware Guide](docs/hardware/README.md) para sa device-specific setup instructions.
+
+## Tunneling (Public Exposure)
+
+I-expose ang iyong local ZeroClaw daemon sa public network sa pamamagitan ng secure tunnels:
+
+```bash
+zeroclaw tunnel start --provider cloudflare
+```
+
+Mga supported tunnel provider:
+
+- **Cloudflare Tunnel** — free HTTPS, no port exposure, multi-domain support
+- **Ngrok** — quick setup, custom domains (paid plan)
+- **Tailscale** — private mesh network, no public port
+
+Tingnan ang [Configuration Reference](docs/config-reference.md#tunnel) para sa kumpletong configuration options.
+
+## Security
+
+Nagpapatupad ang ZeroClaw ng maraming layer ng security:
+
+### Pairing
+
+Ang daemon ay nag-generate ng pairing secret sa unang launch na nakaimbak sa `~/.zeroclaw/workspace/.pairing`. Ang mga client (agent, CLI) ay dapat mag-present ng secret na ito para kumonekta.
+
+```bash
+zeroclaw pairing rotate # Gagawa ng bagong secret at i-invalidate ang dati
+```
+
+### Sandboxing
+
+- **Docker Runtime** — full container isolation na may separate filesystems at networks
+- **Native Runtime** — runs as user process, scoped sa workspace by default
+
+### Allowlists
+
+Ang mga channel ay maaaring mag-limit ng access by user ID:
+
+```toml
+[channels.telegram]
+enabled = true
+allowed_users = [123456789, 987654321] # Explicit allowlist
+```
+
+### Encryption
+
+- **Matrix E2EE** — full end-to-end encryption with device verification
+- **TLS Transport** — all API and tunnel traffic uses HTTPS/TLS
+
+Tingnan ang [Security Documentation](docs/security/README.md) para sa kumpletong policies at practices.
+
+## Observability
+
+Ang ZeroClaw ay naglo-log sa `~/.zeroclaw/workspace/logs/` by default. Ang mga log ay nakaimbak by component:
+
+```
+~/.zeroclaw/workspace/logs/
+├── daemon.log # Daemon logs (startup, API requests, errors)
+├── agent.log # Agent logs (message routing, tool execution)
+├── telegram.log # Channel-specific logs (if enabled)
+└── matrix.log # Channel-specific logs (if enabled)
+```
+
+### Logging Configuration
+
+```toml
+[logging]
+level = "info" # debug, info, warn, error
+path = "~/.zeroclaw/workspace/logs/"
+rotation = "daily" # daily, hourly, size
+max_size_mb = 100 # For size-based rotation
+retention_days = 30 # Automatic purge after N days
+```
+
+Tingnan ang [Configuration Reference](docs/config-reference.md#logging) para sa lahat ng logging options.
+
+### Metrics (Planned)
+
+Prometheus metrics support para sa production monitoring ay coming soon. Tracking sa [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234).
+
+## Skills
+
+Sinusuportahan ng ZeroClaw ang custom skills — reusable modules na nag-e-extend sa system capabilities.
+
+### Skill Definition
+
+Ang mga skill ay nakaimbak sa `~/.zeroclaw/workspace/skills/<skill-name>/` na may ganitong structure:
+
+```
+skills/
+└── my-skill/
+ ├── skill.toml # Skill metadata (name, description, dependencies)
+ ├── prompt.md # System prompt for the AI
+ └── tools/ # Optional custom tools
+ └── my_tool.py
+```
+
+### Skill Example
+
+```toml
+# skills/web-research/skill.toml
+[skill]
+name = "web-research"
+description = "Searches the web and summarizes results"
+version = "1.0.0"
+
+[dependencies]
+tools = ["web_fetch", "bash"]
+```
+
+```markdown
+
+
+You are a research assistant. When asked to research something:
+
+1. Use web_fetch to retrieve content
+2. Summarize results in an easy-to-read format
+3. Cite sources with URLs
+```
+
+### Skill Usage
+
+Ang mga skill ay automatically loaded sa agent startup. I-reference ang mga ito by name sa conversations:
+
+```
+User: Use the web-research skill to find the latest AI news
+Bot: [loads web-research skill, executes web_fetch, summarizes results]
+```
+
+Tingnan ang [Skills](#skills) section para sa kumpletong skill creation instructions.
+
+## Open Skills
+
+Sinusuportahan ng ZeroClaw ang [Open Skills](https://github.com/openagents-com/open-skills) — isang modular at provider-agnostic system para sa pag-extend sa AI agent capabilities.
+
+### Enable Open Skills
+
+```toml
+[skills]
+open_skills_enabled = true
+# open_skills_dir = "/path/to/open-skills" # optional
+```
+
+Maaari mo ring i-override sa runtime gamit ang `ZEROCLAW_OPEN_SKILLS_ENABLED` at `ZEROCLAW_OPEN_SKILLS_DIR`.
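+
+Halimbawa ng runtime override gamit ang environment variables (ang mga value ay ilustratibo lamang):
+
+```bash
+# I-enable ang Open Skills para sa session na ito lamang (mga halimbawang value)
+export ZEROCLAW_OPEN_SKILLS_ENABLED=true
+export ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills"
+zeroclaw agent restart   # nire-reload ang config kasama ang mga variable sa itaas
+```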
+
+## Development
+
+```bash
+cargo build # Dev build
+cargo build --release # Release build (codegen-units=1, works on all devices including Raspberry Pi)
+cargo build --profile release-fast # Faster build (codegen-units=8, requires 16 GB+ RAM)
+cargo test # Run full test suite
+cargo clippy --locked --all-targets -- -D clippy::correctness
+cargo fmt # Format
+
+# Run SQLite vs Markdown comparison benchmark
+cargo test --test memory_comparison -- --nocapture
+```
+
+### Pre-push hook
+
+Ang isang git hook ay nagpapatakbo ng `cargo fmt --check`, `cargo clippy -- -D warnings`, at `cargo test` bago ang bawat push. I-enable ito nang isang beses:
+
+```bash
+git config core.hooksPath .githooks
+```
+
+### Build Troubleshooting (OpenSSL errors on Linux)
+
+Kung makakita ka ng `openssl-sys` build error, i-sync ang dependencies at i-recompile gamit ang repository's lockfile:
+
+```bash
+git pull
+cargo build --release --locked
+cargo install --path . --force --locked
+```
+
+Ang ZeroClaw ay naka-configure na gumamit ng `rustls` para sa HTTP/TLS dependencies; ang `--locked` ay nagpapanatili sa transitive graph na deterministic sa clean environments.
+
+Para i-skip ang hook kapag kailangan mo ng quick push habang nagde-develop:
+
+```bash
+git push --no-verify
+```
+
+## Collaboration & Docs
+
+Magsimula sa documentation hub para sa task-based map:
+
+- Documentation Hub: [`docs/README.md`](docs/README.md)
+- Unified Docs TOC: [`docs/SUMMARY.md`](docs/SUMMARY.md)
+- Commands Reference: [`docs/commands-reference.md`](docs/commands-reference.md)
+- Configuration Reference: [`docs/config-reference.md`](docs/config-reference.md)
+- Providers Reference: [`docs/providers-reference.md`](docs/providers-reference.md)
+- Channels Reference: [`docs/channels-reference.md`](docs/channels-reference.md)
+- Operations Runbook: [`docs/operations-runbook.md`](docs/operations-runbook.md)
+- Troubleshooting: [`docs/troubleshooting.md`](docs/troubleshooting.md)
+- Docs Inventory/Classification: [`docs/docs-inventory.md`](docs/docs-inventory.md)
+- PR/Issue Triage Snapshot (as of Feb 18, 2026): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
+
+Mga pangunahing collaboration references:
+
+- Documentation Hub: [docs/README.md](docs/README.md)
+- Documentation Template: [docs/doc-template.md](docs/doc-template.md)
+- Documentation Change Checklist: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
+- Channel Configuration Reference: [docs/channels-reference.md](docs/channels-reference.md)
+- Matrix Encrypted Room Operations: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
+- Contributing Guide: [CONTRIBUTING.md](CONTRIBUTING.md)
+- PR Workflow Policy: [docs/pr-workflow.md](docs/pr-workflow.md)
+- Reviewer Playbook (triage + deep review): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
+- Ownership and CI Triage Map: [docs/ci-map.md](docs/ci-map.md)
+- Security Disclosure Policy: [SECURITY.md](SECURITY.md)
+
+Para sa deployment at runtime operations:
+
+- Network Deployment Guide: [docs/network-deployment.md](docs/network-deployment.md)
+- Proxy Agent Playbook: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
+
+## Suportahan ang ZeroClaw
+
+Kung tinutulungan ng ZeroClaw ang iyong trabaho at nais mong suportahan ang patuloy na development, maaari kang mag-donate dito:
+
+
+
+### 🙏 Special Thanks
+
+Isang taos-pusong pasasalamat sa mga komunidad at institusyon na nagbibigay-inspirasyon at nagpapakain sa open-source work na ito:
+
+- **Harvard University** — para sa pagpapaunlad ng intelektwal na kuryosidad at pagtulak sa mga hangganan ng kung ano ang posible.
+- **MIT** — para sa pagtatanggol ng open knowledge, open source, at ang paniniwala na ang teknolohiya ay dapat na accessible sa lahat.
+- **Sundai Club** — para sa komunidad, enerhiya, at ang walang-humpay na kagustuhang bumuo ng mga bagay na mahalaga.
+- **Ang Mundo at Higit Pa** 🌍✨ — sa bawat contributor, dreamer, at builder doon sa labas na gumagawa ng open source bilang isang puwersa para sa kabutihan. Ito ay para sa iyo.
+
+Kami ay bumubuo sa open source dahil ang mga pinakamahusay na ideya ay nagmumula sa lahat ng dako. Kung binabasa mo ito, ikaw ay bahagi nito. Maligayang pagdating. 🦀❤️
+
+## ⚠️ Official Repository at Impersonation Warning
+
+**Ito ang tanging opisyal na ZeroClaw repository:**
+
+> https://github.com/zeroclaw-labs/zeroclaw
+
+Ang anumang iba pang repository, organization, domain, o package na nagpapanggap na "ZeroClaw" o nagpapahiwatig ng affiliation sa ZeroClaw Labs ay **hindi awtorisado at hindi kaugnay sa proyektong ito**. Ang mga kilalang unauthorized forks ay ililista sa [TRADEMARK.md](TRADEMARK.md).
+
+Kung makakita ka ng impersonation o trademark misuse, mangyaring [magbukas ng isyu](https://github.com/zeroclaw-labs/zeroclaw/issues).
+
+---
+
+## License
+
+Ang ZeroClaw ay dual-licensed para sa maximum openness at contributor protection:
+
+| License | Use Cases |
+| ---------------------------- | ------------------------------------------------------------ |
+| [MIT](LICENSE-MIT) | Open-source, research, academic, personal use |
+| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |
+
+Maaari mong piliin ang alinmang license. **Ang mga contributor ay awtomatikong nagbibigay ng rights sa ilalim ng pareho** — tingnan ang [CLA.md](CLA.md) para sa kumpletong contributor agreement.
+
+### Trademark
+
+Ang pangalang **ZeroClaw** at logo ay mga rehistradong trademark ng ZeroClaw Labs. Ang license na ito ay hindi nagbibigay ng pahintulot na gamitin ang mga ito upang ipahiwatig ang endorsement o affiliation. Tingnan ang [TRADEMARK.md](TRADEMARK.md) para sa mga allowed at prohibited uses.
+
+### Contributor Protections
+
+- **Pinapanatili mo** ang copyright ng iyong mga kontribusyon
+- **Patent grant** (Apache 2.0) ay nagpoprotekta sa iyo laban sa patent claims ng ibang mga contributor
+- Ang iyong mga kontribusyon ay **permanenteng naa-attributed** sa commit history at [NOTICE](NOTICE)
+- Walang trademark rights ang naililipat sa pamamagitan ng pagko-contribute
+
+## Mag-contribute
+
+Tingnan ang [CONTRIBUTING.md](CONTRIBUTING.md) at [CLA.md](CLA.md). Mag-implement ng isang trait, mag-submit ng PR:
+
+- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
+- Bagong `Provider` → `src/providers/`
+- Bagong `Channel` → `src/channels/`
+- Bagong `Observer` → `src/observability/`
+- Bagong `Tool` → `src/tools/`
+- Bagong `Memory` → `src/memory/`
+- Bagong `Tunnel` → `src/tunnel/`
+- Bagong `Skill` → `~/.zeroclaw/workspace/skills/<skill-name>/`
+
+---
+
+**ZeroClaw** — Zero overhead. Zero compromise. Deploy anywhere. Swap anything. 🦀
+
+## Star History
+
+
diff --git a/README.tr.md b/README.tr.md
new file mode 100644
index 000000000..67bf481bd
--- /dev/null
+++ b/README.tr.md
@@ -0,0 +1,914 @@
+
+
+
+
+
+ZeroClaw 🦀
+
+
+    Sıfır ek yük. Sıfır ödün. %100 Rust. %100 Agnostik.
+    ⚡️ $10'luk donanımda <5MB RAM ile çalışır: OpenClaw'dan %99 daha az bellek ve Mac mini'den %98 daha ucuz!
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Harvard, MIT ve Sundai.Club topluluklarının öğrencileri ve üyeleri tarafından inşa edilmiştir.
+
+ Hızlı, hafif ve tamamen otonom AI asistan altyapısı
+ Her yerde dağıtın. Her şeyi değiştirin.
+
+
+
+ ZeroClaw, ajan iş akışları için çalışma zamanı işletim sistemidir — modelleri, araçları, belleği ve yürütmeyi soyutlayan, ajanları bir kez oluşturup ve her yerde çalıştıran bir altyapıdır.
+
+
+
+Trait tabanlı mimari · varsayılan olarak güvenli çalışma zamanı · değiştirilebilir sağlayıcı/kanal/araç · her şey eklenebilir
+
+### 📢 Duyurular
+
+Önemli duyurular için bu tabloyu kullanın (uyumluluk değişiklikleri, güvenlik bildirimleri, bakım pencereleri ve sürüm engellemeleri).
+
+| Tarih (UTC) | Seviye | Duyuru | Eylem |
+| ---------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 2026-02-19 | _Kritik_ | **`openagen/zeroclaw` veya `zeroclaw.org` ile bağlantılı değiliz.** `zeroclaw.org` alanı şu anda `openagen/zeroclaw` fork'una işaret ediyor ve bu alan/depo, resmi web sitemizi/projemizi taklit ediyor. | Bu kaynaklardan gelen bilgilere, ikili dosyalara, bağış toplama faaliyetlerine veya duyurulara güvenmeyin. Sadece [bu depoyu](https://github.com/zeroclaw-labs/zeroclaw) ve doğrulanmış sosyal medya hesaplarımızı kullanın. |
+| 2026-02-21 | _Önemli_ | Resmi web sitemiz artık çevrimiçi: [zeroclawlabs.ai](https://zeroclawlabs.ai). Bekleme sürecinde sabırlarınız için teşekkürler. Hala taklit girişimleri tespit ediyoruz: ZeroClaw adına resmi kanallarımız aracılığıyla yayınlanmayan herhangi bir yatırım/bağış faaliyetine katılmayın. | [Bu depoyu](https://github.com/zeroclaw-labs/zeroclaw) tek doğruluk kaynağı olarak kullanın. Resmi güncellemeler için [X (@zeroclawlabs)](https://x.com/zeroclawlabs?s=21), [Telegram (@zeroclawlabs)](https://t.me/zeroclawlabs), [Facebook (grup)](https://www.facebook.com/groups/zeroclaw), [Reddit (r/zeroclawlabs)](https://www.reddit.com/r/zeroclawlabs/) ve [Xiaohongshu](https://www.xiaohongshu.com/user/profile/67cbfc43000000000d008307?xsec_token=AB73VnYnGNx5y36EtnnZfGmAmS-6Wzv8WMuGpfwfkg6Yc%3D&xsec_source=pc_search)'u takip edin. |
+| 2026-02-19 | _Önemli_ | Anthropic, 2026-02-19 tarihinde kimlik doğrulama ve kimlik bilgileri kullanım şartlarını güncelledi. OAuth kimlik doğrulaması (Free, Pro, Max) yalnızca Claude Code ve Claude.ai içindir; Claude Free/Pro/Max OAuth belirteçlerini başka herhangi bir ürün, araç veya hizmette (Agent SDK dahil) kullanmak yasaktır ve Tüketici Kullanım Şartlarını ihlal edebilir. | Olası kayıpları önlemek için lütfen geçici olarak Claude Code OAuth entegrasyonlarından kaçının. Orijinal madde: [Authentication and Credential Use](https://code.claude.com/docs/en/legal-and-compliance#authentication-and-credential-use). |
+
+### ✨ Özellikler
+
+- 🏎️ **Varsayılan Hafif Çalışma Zamanı:** Yaygın CLI iş akışları ve durum komutları üretim derlemelerinde birkaç megabaytlık bellek alanında çalışır.
+- 💰 **Maliyet Etkin Dağıtım:** Ağır çalışma zamanı bağımlılıkları olmadan düşük maliyetli kartlar ve küçük bulut örnekleri için tasarlanmıştır.
+- 💡 **Hızlı Soğuk Başlangıçlar:** Tek ikili Rust çalışma zamanı, komut ve arka plan programı başlatmalarını günlük operasyonlar için neredeyse anlık tutar.
+- 🌍 **Taşınabilir Mimari:** Değiştirilebilir sağlayıcı/kanal/araç ile ARM, x86 ve RISC-V üzerinde tek ikili iş akışı.
+
+### Neden ekipler ZeroClaw'ı seçiyor
+
+- **Varsayılan hafif:** küçük Rust ikilisi, hızlı başlangıç, düşük bellek ayak izi.
+- **Tasarıma göre güvenli:** eşleştirme, katı kum alanı, açık izin listeleri, çalışma alanı kapsamı.
+- **Tamamen değiştirilebilir:** çekirdek sistemler trait'tir (sağlayıcılar, kanallar, araçlar, bellek, tüneller).
+- **Satıcı kilitlenmesi yok:** OpenAI uyumlu sağlayıcı desteği + eklenebilir özel uç noktalar.
+
+## Kıyaslama Anlık Görüntüsü (ZeroClaw vs OpenClaw, Tekrarlanabilir)
+
+Yerel makinede hızlı kıyaslama (macOS arm64, Şub. 2026) 0.8 GHz uç donanımı için normalize edilmiş.
+
+| | OpenClaw | NanoBot | PicoClaw | ZeroClaw 🦀 |
+| ---------------------------- | ------------- | -------------- | --------------- | --------------------- |
+| **Dil** | TypeScript | Python | Go | **Rust** |
+| **RAM** | > 1 GB | > 100 MB | < 10 MB | **< 5 MB** |
+| **Başlangıç (0.8 GHz çekirdek)** | > 500s | > 30s | < 1s | **< 10ms** |
+| **İkili Boyut** | ~28 MB (dist) | Yok (Betikler) | ~8 MB | **3.4 MB** |
+| **Maliyet** | Mac Mini $599 | Linux SBC ~$50 | Linux kart $10 | **Herhangi bir donanım $10** |
+
+> Notlar: ZeroClaw sonuçları `/usr/bin/time -l` kullanılarak üretim derlemelerinde ölçülür. OpenClaw Node.js çalışma zamanı gerektirir (tipik olarak ~390 MB ek bellek yükü), NanoBot ise Python çalışma zamanı gerektirir. PicoClaw ve ZeroClaw statik ikililerdir. Yukarıdaki RAM rakamları çalışma zamanı belleğidir; derleme zamanı derleme gereksinimleri daha yüksektir.
+
+
+
+
+
+### Tekrarlanabilir Yerel Ölçüm
+
+Kıyaslama iddiaları kod ve araç zincirleri geliştikçe değişebilir, bu yüzden her zaman mevcut derlemenizi yerel olarak ölçün:
+
+```bash
+cargo build --release
+ls -lh target/release/zeroclaw
+
+/usr/bin/time -l target/release/zeroclaw --help
+/usr/bin/time -l target/release/zeroclaw status
+```
+
+Örnek ölçüm (macOS arm64, 18 Şubat 2026'da ölçüldü):
+
+- Sürüm ikili boyutu: `8.8M`
+- `zeroclaw --help`: gerçek süre yaklaşık `0.02s`, en büyük bellek ayak izi ~`3.9 MB`
+- `zeroclaw status`: gerçek süre yaklaşık `0.01s`, en büyük bellek ayak izi ~`4.1 MB`
+
+## Ön Koşullar
+
+
+Windows
+
+### Windows — Gerekli
+
+1. **Visual Studio Build Tools** (MSVC bağlayıcısını ve Windows SDK'yı sağlar):
+
+ ```powershell
+ winget install Microsoft.VisualStudio.2022.BuildTools
+ ```
+
+ Kurulum sırasında (veya Visual Studio Installer aracılığıyla), **"C++ ile Masaüstü Geliştirme"** iş yükünü seçin.
+
+2. **Rust Araç Zinciri:**
+
+ ```powershell
+ winget install Rustlang.Rustup
+ ```
+
+ Kurulumdan sonra, yeni bir terminal açın ve kararlı araç zincirinin aktif olduğundan emin olmak için `rustup default stable` çalıştırın.
+
+3. İkisinin de çalıştığını **doğrulayın**:
+ ```powershell
+ rustc --version
+ cargo --version
+ ```
+
+### Windows — İsteğe Bağlı
+
+- **Docker Desktop** — yalnızca [Docker kum alanlı çalışma zamanı](#mevcut-çalışma-zamanı-desteği) kullanıyorsanız gereklidir (`runtime.kind = "docker"`). `winget install Docker.DockerDesktop` aracılığıyla yükleyin.
+
+
+
+
+Linux / macOS
+
+### Linux / macOS — Gerekli
+
+1. **Temel derleme araçları:**
+ - **Linux (Debian/Ubuntu):** `sudo apt install build-essential pkg-config`
+ - **Linux (Fedora/RHEL):** `sudo dnf group install development-tools && sudo dnf install pkg-config`
+ - **macOS:** Xcode Command Line Tools'u yükleyin: `xcode-select --install`
+
+2. **Rust Araç Zinciri:**
+
+ ```bash
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+ ```
+
+ Detaylar için [rustup.rs](https://rustup.rs) adresine bakın.
+
+3. **Doğrulayın:**
+ ```bash
+ rustc --version
+ cargo --version
+ ```
+
+### Linux / macOS — İsteğe Bağlı
+
+- **Docker** — yalnızca [Docker kum alanlı çalışma zamanı](#mevcut-çalışma-zamanı-desteği) kullanıyorsanız gereklidir (`runtime.kind = "docker"`).
+ - **Linux (Debian/Ubuntu):** [docs.docker.com](https://docs.docker.com/engine/install/ubuntu/) adresine bakın
+ - **Linux (Fedora/RHEL):** [docs.docker.com](https://docs.docker.com/engine/install/fedora/) adresine bakın
+ - **macOS:** [docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop/) adresinden Docker Desktop'u yükleyin
+
+
+
+## Hızlı Başlangıç
+
+### Seçenek 1: Otomatik kurulum (önerilen)
+
+`bootstrap.sh` betiği Rust'u yükler, ZeroClaw'ı klonlar, derler ve ilk geliştirme ortamınızı ayarlar:
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/zeroclaw-labs/zeroclaw/main/bootstrap.sh | bash
+```
+
+Bu işlem:
+
+1. Rust'u yükler (yoksa)
+2. ZeroClaw deposunu klonlar
+3. ZeroClaw'ı sürüm modunda derler
+4. `zeroclaw`'ı `~/.cargo/bin/`e yükler
+5. `~/.zeroclaw/workspace/` içinde varsayılan çalışma alanı yapısını oluşturur
+6. Başlangıç `~/.zeroclaw/workspace/config.toml` yapılandırma dosyasını üretir
+
+Önyüklemeden sonra, `zeroclaw` komutunu global olarak kullanmak için kabuğunuzu yeniden yükleyin veya `source ~/.cargo/env` çalıştırın.
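+
+Örnek (önyükleme sonrası hızlı kontrol; aynı komutlar aşağıdaki manuel kurulum adımlarında da yer alır):
+
+```bash
+source ~/.cargo/env
+zeroclaw --version   # yüklü sürümü yazdırmalı
+zeroclaw status      # çalışma alanı durumunu göstermeli
+```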
+
+### Seçenek 2: Manuel kurulum
+
+
+Manuel kurulum adımlarını görmek için tıklayın
+
+```bash
+# 1. Depoyu klonla
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+
+# 2. Sürüm olarak derle
+cargo build --release --locked
+
+# 3. İkiliyi yükle
+cargo install --path . --locked
+
+# 4. Çalışma alanını başlat
+zeroclaw init
+
+# 5. Kurulumu doğrula
+zeroclaw --version
+zeroclaw status
+```
+
+
+
+### Kurulumdan Sonra
+
+Kurulumdan sonra (önyükleme veya manuel olarak), şunları görmelisiniz:
+
+```
+~/.zeroclaw/workspace/
+├── config.toml # Ana yapılandırma
+├── .pairing # Eşleştirme sırları (ilk başlangıçta oluşturulur)
+├── logs/ # Arka plan programı/ajan logları
+├── skills/ # Özel beceriler
+└── memory/ # Konuşma bağlamı depolaması
+```
+
+**Sonraki adımlar:**
+
+1. AI sağlayıcılarınızı `~/.zeroclaw/workspace/config.toml` içinde yapılandırın
+2. Gelişmiş seçenekler için [yapılandırma referansına](docs/config-reference.md) bakın
+3. Ajanı başlatın: `zeroclaw agent start`
+4. Tercih ettiğiniz kanal üzerinden test edin ([kanallar referansına](docs/channels-reference.md) bakın)
+
+## Yapılandırma
+
+Sağlayıcıları, kanalları ve sistem davranışını yapılandırmak için `~/.zeroclaw/workspace/config.toml` dosyasını düzenleyin.
+
+### Hızlı Yapılandırma Referansı
+
+```toml
+[providers.anthropic]
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+api_key = "sk-..."
+model = "gpt-4o"
+
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@bot:matrix.org"
+password = "..."
+
+[memory]
+kind = "markdown" # veya "sqlite" veya "none"
+
+[runtime]
+kind = "native" # veya "docker" (Docker gerektirir)
+```
+
+**Tam referans belgeleri:**
+
+- [Yapılandırma Referansı](docs/config-reference.md) — tüm ayarlar, doğrulamalar, varsayılanlar
+- [Sağlayıcı Referansı](docs/providers-reference.md) — AI sağlayıcıya özgü yapılandırmalar
+- [Kanallar Referansı](docs/channels-reference.md) — Telegram, Matrix, Slack, Discord ve daha fazlası
+- [Operasyonlar](docs/operations-runbook.md) — üretim izleme, sırları döndürme, ölçeklendirme
+
+### Mevcut Çalışma Zamanı Desteği
+
+ZeroClaw iki kod yürütme arka ucu destekler:
+
+- **`native`** (varsayılan) — doğrudan süreç yürütme, en hızlı yol, güvenilir ortamlar için ideal
+- **`docker`** — tam konteyner yalıtımı, sertleştirilmiş güvenlik ilkeleri, Docker gerektirir
+
+Katı kum alanı veya ağ yalıtımı gerekiyorsa `runtime.kind = "docker"` kullanın. Tam detaylar için [yapılandırma referansına](docs/config-reference.md#runtime) bakın.
+
+## Komutlar
+
+```bash
+# Çalışma alanı yönetimi
+zeroclaw init # Yeni bir çalışma alanı başlatır
+zeroclaw status # Arka plan programı/ajan durumunu gösterir
+zeroclaw config validate # config.toml sözdizimini ve değerlerini doğrular
+
+# Arka plan programı yönetimi
+zeroclaw daemon start # Arka plan programını arka planda başlatır
+zeroclaw daemon stop # Çalışan arka plan programını durdurur
+zeroclaw daemon restart # Arka plan programını yeniden başlatır (yapılandırmayı yeniden yükler)
+zeroclaw daemon logs # Arka plan programı loglarını gösterir
+
+# Ajan yönetimi
+zeroclaw agent start # Ajanı başlatır (çalışan arka plan programı gerektirir)
+zeroclaw agent stop # Ajanı durdurur
+zeroclaw agent restart # Ajanı yeniden başlatır (yapılandırmayı yeniden yükler)
+
+# Eşleştirme operasyonları
+zeroclaw pairing init # Yeni bir eşleştirme sırrı oluşturur
+zeroclaw pairing rotate # Mevcut eşleştirme sırrını döndürür
+
+# Tünelleme (herkese açık kullanım için)
+zeroclaw tunnel start # Yerel arka plan programına bir tünel başlatır
+zeroclaw tunnel stop # Aktif tüneli durdurur
+
+# Teşhis
+zeroclaw doctor # Sistem sağlık kontrollerini çalıştırır
+zeroclaw version # Sürüm ve derleme bilgilerini gösterir
+```
+
+Tam seçenekler ve örnekler için [Komutlar Referansına](docs/commands-reference.md) bakın.
+
+## Mimari
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│ Kanallar (trait) │
+│ Telegram │ Matrix │ Slack │ Discord │ Web │ CLI │ Özel │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Ajan Orkestratörü │
+│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
+│ │ Mesaj │ │ Bağlam │ │ Araç │ │
+│ │ Yönlendirme│ │ Bellek │ │ Yürütme │ │
+│ └──────────────┘ └──────────────┘ └──────────────┘ │
+└─────────────────────────┬───────────────────────────────────────┘
+ │
+ ┌───────────────┼───────────────┐
+ ▼ ▼ ▼
+┌──────────────┐ ┌──────────────┐ ┌──────────────┐
+│ Sağlayıcılar│ │ Bellek │ │ Araçlar │
+│ (trait) │ │ (trait) │ │ (trait) │
+├──────────────┤ ├──────────────┤ ├──────────────┤
+│ Anthropic │ │ Markdown │ │ Filesystem │
+│ OpenAI │ │ SQLite │ │ Bash │
+│ Gemini │ │ Yok │ │ Web Fetch │
+│ Ollama │ │ Özel │ │ Özel │
+│ Özel │ └──────────────┘ └──────────────┘
+└──────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Çalışma Zamanı (trait) │
+│ Native │ Docker │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+**Temel ilkeler:**
+
+- Her şey bir **trait'tir** — sağlayıcılar, kanallar, araçlar, bellek, tüneller
+- Kanallar orkestratörü çağırır; orkestratör sağlayıcıları + araçları çağırır
+- Bellek sistemi konuşma bağlamını yönetir (markdown, SQLite veya yok)
+- Çalışma zamanı kod yürütmeyi soyutlar (yerel veya Docker)
+- Satıcı kilitlenmesi yok — kod değişikliği olmadan Anthropic ↔ OpenAI ↔ Gemini ↔ Ollama değiştirin
+
+Detaylı diyagramlar ve uygulama detayları için [mimari belgelerine](docs/architecture.svg) bakın.
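+
+Satıcı bağımsızlığının pratikte nasıl göründüğüne dair varsayımsal bir taslak: aşağıdaki Çoklu-Sağlayıcı örneğindeki anahtarlarla, yalnızca yapılandırma düzenlenerek varsayılan sağlayıcı değiştirilir; kod değişikliği gerekmez.
+
+```toml
+# Taslak: kod değişikliği olmadan varsayılan sağlayıcıyı değiştirme
+# (ilgili [providers.*] bölümünün de etkin olması gerekir)
+[orchestrator]
+default_provider = "ollama"   # örneğin "anthropic" yerine yerel Ollama
+```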
+
+## Örnekler
+
+### Telegram Bot
+
+```toml
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+allowed_users = [987654321] # Telegram kullanıcı ID'niz
+```
+
+Arka plan programını + ajanı başlatın, ardından Telegram'da botunuza bir mesaj gönderin:
+
+```
+/start
+Merhaba! Bir Python betiği yazmama yardımcı olabilir misin?
+```
+
+Bot, AI tarafından oluşturulan kodla yanıt verir, istenirse araçları yürütür ve konuşma bağlamını korur.
+
+### Matrix (uçtan uca şifreleme)
+
+```toml
+[channels.matrix]
+enabled = true
+homeserver_url = "https://matrix.org"
+username = "@zeroclaw:matrix.org"
+password = "..."
+device_name = "zeroclaw-prod"
+e2ee_enabled = true
+```
+
+Şifreli bir odaya `@zeroclaw:matrix.org` davet edin ve bot tam şifrelemeyle yanıt verecektir. Cihaz doğrulama kurulumu için [Matrix E2EE Kılavuzuna](docs/matrix-e2ee-guide.md) bakın.
+
+### Çoklu-Sağlayıcı
+
+```toml
+[providers.anthropic]
+enabled = true
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+enabled = true
+api_key = "sk-..."
+model = "gpt-4o"
+
+[orchestrator]
+default_provider = "anthropic"
+fallback_providers = ["openai"] # Sağlayıcı hatasında geçiş
+```
+
+Anthropic başarısız olursa veya hız sınırına ulaşırsa, orkestratör otomatik olarak OpenAI'ya geçer.
+
+### Özel Bellek
+
+```toml
+[memory]
+kind = "sqlite"
+path = "~/.zeroclaw/workspace/memory/conversations.db"
+retention_days = 90 # 90 gün sonra otomatik temizleme
+```
+
+Veya insan tarafından okunabilir depolama için Markdown kullanın:
+
+```toml
+[memory]
+kind = "markdown"
+path = "~/.zeroclaw/workspace/memory/"
+```
+
+Tüm bellek seçenekleri için [Yapılandırma Referansına](docs/config-reference.md#memory) bakın.
+
+## Sağlayıcı Desteği
+
+| Sağlayıcı | Durum | API Anahtarı | Örnek Modeller |
+| ----------------- | ----------- | ------------------- | ---------------------------------------------------- |
+| **Anthropic** | ✅ Kararlı | `ANTHROPIC_API_KEY` | `claude-sonnet-4-20250514`, `claude-opus-4-20250514` |
+| **OpenAI** | ✅ Kararlı | `OPENAI_API_KEY` | `gpt-4o`, `gpt-4o-mini`, `o1`, `o1-mini` |
+| **Google Gemini** | ✅ Kararlı | `GOOGLE_API_KEY` | `gemini-2.0-flash-exp`, `gemini-exp-1206` |
+| **Ollama** | ✅ Kararlı | Yok (yerel) | `llama3.3`, `qwen2.5`, `phi4` |
+| **Cerebras** | ✅ Kararlı | `CEREBRAS_API_KEY` | `llama-3.3-70b` |
+| **Groq** | ✅ Kararlı | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
+| **Mistral** | 🚧 Planlanan | `MISTRAL_API_KEY` | TBD |
+| **Cohere** | 🚧 Planlanan | `COHERE_API_KEY` | TBD |
+
+### Özel Uç Noktalar
+
+ZeroClaw, OpenAI uyumlu uç noktaları destekler:
+
+```toml
+[providers.custom]
+enabled = true
+api_key = "..."
+base_url = "https://api.your-llm-provider.com/v1"
+model = "your-model-name"
+```
+
+Örnek: herhangi bir LLM'ye OpenAI arayüzü üzerinden erişmek için [LiteLLM](https://github.com/BerriAI/litellm)'i proxy olarak kullanın.
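+
+Yerelde çalışan bir LiteLLM proxy'sine yönlendirme kabaca şöyle görünebilir (port numarası ve model adı varsayımdır; kendi LiteLLM kurulumunuza göre uyarlayın):
+
+```toml
+# Varsayımsal taslak: OpenAI uyumlu LiteLLM proxy'si üzerinden özel uç nokta
+[providers.custom]
+enabled = true
+api_key = "..."                        # LiteLLM tarafında tanımlı anahtar
+base_url = "http://localhost:4000/v1"  # varsayım: LiteLLM'in yerel portu
+model = "your-model-name"              # LiteLLM yapılandırmanızdaki model adı
+```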
+
+Tam yapılandırma detayları için [Sağlayıcı Referansına](docs/providers-reference.md) bakın.
+
+## Kanal Desteği
+
+| Kanal | Durum | Kimlik Doğrulama | Notlar |
+| ------------ | ----------- | ------------------------ | --------------------------------------------------------- |
+| **Telegram** | ✅ Kararlı | Bot Token | Dosyalar, resimler, satır içi düğmeler dahil tam destek |
+| **Matrix** | ✅ Kararlı | Şifre veya Token | Cihaz doğrulamalı E2EE desteği |
+| **Slack** | 🚧 Planlanan | OAuth veya Bot Token | Çalışma alanı erişimi gerektirir |
+| **Discord** | 🚧 Planlanan | Bot Token | Guild izinleri gerektirir |
+| **WhatsApp** | 🚧 Planlanan | Twilio veya resmi API | İş hesabı gerektirir |
+| **CLI** | ✅ Kararlı | Yok | Doğrudan konuşma arayüzü |
+| **Web** | 🚧 Planlanan | API Anahtarı veya OAuth | Tarayıcı tabanlı sohbet arayüzü |
+
+Tam yapılandırma talimatları için [Kanallar Referansına](docs/channels-reference.md) bakın.
+
+## Araç Desteği
+
+ZeroClaw, kod yürütme, dosya sistemi erişimi ve web alımı için yerleşik araçlar sağlar:
+
+| Araç | Açıklama | Gerekli Çalışma Zamanı |
+| -------------------- | --------------------------- | ----------------------------- |
+| **bash** | Shell komutlarını yürüt | Yerel veya Docker |
+| **python** | Python betiklerini yürüt | Python 3.8+ (yerel) veya Docker |
+| **javascript** | Node.js kodunu yürüt | Node.js 18+ (yerel) veya Docker |
+| **filesystem_read** | Dosyaları oku | Yerel veya Docker |
+| **filesystem_write** | Dosyaları yaz | Yerel veya Docker |
+| **web_fetch** | Web içeriği al | Yerel veya Docker |
+
+### Yürütme Güvenliği
+
+- **Yerel Çalışma Zamanı** — arka plan programının kullanıcısı altında bir süreç olarak çalışır; tam dosya sistemi erişimi vardır
+- **Docker Çalışma Zamanı** — tam konteyner yalıtımı, ayrı dosya sistemleri ve ağlar
+
+`config.toml` içinde yürütme ilkesini yapılandırın:
+
+```toml
+[runtime]
+kind = "docker"
+allowed_tools = ["bash", "python", "filesystem_read"] # Açık izin listesi
+```
+
+Tam güvenlik seçenekleri için [Yapılandırma Referansına](docs/config-reference.md#runtime) bakın.
+
+## Dağıtım
+
+### Yerel Dağıtım (Geliştirme)
+
+```bash
+zeroclaw daemon start
+zeroclaw agent start
+```
+
+### Sunucu Dağıtımı (Üretim)
+
+Arka plan programını ve ajanı hizmet olarak yönetmek için systemd kullanın:
+
+```bash
+# İkiliyi yükle
+cargo install --path . --locked
+
+# Çalışma alanını yapılandır
+zeroclaw init
+
+# systemd hizmet dosyaları oluştur
+sudo cp deployment/systemd/zeroclaw-daemon.service /etc/systemd/system/
+sudo cp deployment/systemd/zeroclaw-agent.service /etc/systemd/system/
+
+# Hizmetleri etkinleştir ve başlat
+sudo systemctl enable zeroclaw-daemon zeroclaw-agent
+sudo systemctl start zeroclaw-daemon zeroclaw-agent
+
+# Durumu doğrula
+sudo systemctl status zeroclaw-daemon
+sudo systemctl status zeroclaw-agent
+```
+
+Tam üretim dağıtım talimatları için [Ağ Dağıtımı Kılavuzuna](docs/network-deployment.md) bakın.
+
+### Docker
+
+```bash
+# İmajı oluştur
+docker build -t zeroclaw:latest .
+
+# Konteyneri çalıştır
+docker run -d \
+ --name zeroclaw \
+ -v ~/.zeroclaw/workspace:/workspace \
+ -e ANTHROPIC_API_KEY=sk-ant-... \
+ zeroclaw:latest
+```
+
+Derleme detayları ve yapılandırma seçenekleri için [`Dockerfile`](Dockerfile)'a bakın.
+
+### Uç Donanım
+
+ZeroClaw, düşük güç tüketimli donanımda çalışmak üzere tasarlanmıştır:
+
+- **Raspberry Pi Zero 2 W** — ~512 MB RAM, dört çekirdekli ARMv8 (Cortex-A53), ~$15 donanım maliyeti
+- **Raspberry Pi 4/5** — 1 GB+ RAM, çok çekirdekli, eşzamanlı iş yükleri için ideal
+- **Orange Pi Zero 2** — ~512 MB RAM, dört çekirdekli ARMv8, ultra düşük maliyet
+- **x86 SBC'ler (Intel N100)** — 4-8 GB RAM, hızlı derlemeler, yerel Docker desteği
+
+Cihaza özgü kurulum talimatları için [Donanım Kılavuzuna](docs/hardware/README.md) bakın.
+
+## Tünelleme (Herkese Açık Kullanım)
+
+Yerel ZeroClaw arka plan programınızı güvenli tüneller aracılığıyla herkese açık ağa çıkarın:
+
+```bash
+zeroclaw tunnel start --provider cloudflare
+```
+
+Desteklenen tünel sağlayıcıları:
+
+- **Cloudflare Tunnel** — ücretsiz HTTPS, port açığa çıkarma yok, çoklu etki alanı desteği
+- **Ngrok** — hızlı kurulum, özel etki alanları (ücretli plan)
+- **Tailscale** — özel mesh ağı, herkese açık port yok
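+
+Sağlayıcı, yukarıda gösterilen `--provider` bayrağıyla seçilir; kabaca şu şekilde kullanılır (sağlayıcı tarafındaki hesap ve kimlik doğrulama kurulumu ayrıca gerekir; "ngrok" değeri bir varsayımdır):
+
+```bash
+# Taslak: farklı bir tünel sağlayıcısıyla başlatma ve durdurma
+zeroclaw tunnel start --provider ngrok
+zeroclaw tunnel stop
+```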
+
+Tam yapılandırma seçenekleri için [Yapılandırma Referansına](docs/config-reference.md#tunnel) bakın.
+
+## Güvenlik
+
+ZeroClaw birden çok güvenlik katmanı uygular:
+
+### Eşleştirme
+
+Arka plan programı ilk başlangıçta `~/.zeroclaw/workspace/.pairing` içinde saklanan bir eşleştirme sırrı oluşturur. İstemciler (ajan, CLI) bağlanmak için bu sırrı sunmalıdır.
+
+```bash
+zeroclaw pairing rotate # Yeni bir sır oluşturur ve eskisini geçersiz kılar
+```
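+
+İlk kurulum akışının bir taslağı (sır dosyasının yolu yukarıdaki paragraftan alınmıştır):
+
+```bash
+# Taslak: yeni bir eşleştirme sırrı oluştur ve konumunu doğrula
+zeroclaw pairing init
+cat ~/.zeroclaw/workspace/.pairing   # istemcilerin (ajan, CLI) bağlanırken sunduğu sır
+```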
+
+### Kum Alanı
+
+- **Docker Çalışma Zamanı** — ayrı dosya sistemleri ve ağlarla tam konteyner yalıtımı
+- **Yerel Çalışma Zamanı** — kullanıcı süreci olarak çalışır; varsayılan olarak çalışma alanıyla sınırlıdır
+
+### İzin Listeleri
+
+Kanallar kullanıcı ID'sine göre erişimi kısıtlayabilir:
+
+```toml
+[channels.telegram]
+enabled = true
+allowed_users = [123456789, 987654321] # Açık izin listesi
+```
+
+### Şifreleme
+
+- **Matrix E2EE** — cihaz doğrulamalı tam uçtan uca şifreleme
+- **TLS Taşıma** — tüm API ve tünel trafiği HTTPS/TLS kullanır
+
+Tam ilkeler ve uygulamalar için [Güvenlik Belgelerine](docs/security/README.md) bakın.
+
+## Gözlemlenebilirlik
+
+ZeroClaw varsayılan olarak `~/.zeroclaw/workspace/logs/` dizinine log yazar. Loglar bileşene göre saklanır:
+
+```
+~/.zeroclaw/workspace/logs/
+├── daemon.log # Arka plan programı logları (başlangıç, API istekleri, hatalar)
+├── agent.log # Ajan logları (mesaj yönlendirme, araç yürütme)
+├── telegram.log # Kanala özgü loglar (etkinse)
+└── matrix.log # Kanala özgü loglar (etkinse)
+```
+
+### Loglama Yapılandırması
+
+```toml
+[logging]
+level = "info" # debug, info, warn, error
+path = "~/.zeroclaw/workspace/logs/"
+rotation = "daily" # günlük, saatlik, boyut
+max_size_mb = 100 # Boyut tabanlı döndürme için
+retention_days = 30 # N gün sonra otomatik temizleme
+```
+
+Tüm loglama seçenekleri için [Yapılandırma Referansına](docs/config-reference.md#logging) bakın.
+
+### Metrikler (Planlanan)
+
+Üretim izleme için Prometheus metrikleri desteği yakında geliyor. [#234](https://github.com/zeroclaw-labs/zeroclaw/issues/234) numaralı konuda takip ediliyor.
+
+## Beceriler
+
+ZeroClaw, sistem yeteneklerini genişleten yeniden kullanılabilir modüller olan özel becerileri destekler.
+
+### Beceri Tanımı
+
+Beceriler bu yapı ile `~/.zeroclaw/workspace/skills//` içinde saklanır:
+
+```
+skills/
+└── my-skill/
+ ├── skill.toml # Beceri metaverileri (ad, açıklama, bağımlılıklar)
+ ├── prompt.md # AI için sistem istemi
+ └── tools/ # İsteğe bağlı özel araçlar
+ └── my_tool.py
+```
+
+### Beceri Örneği
+
+```toml
+# skills/web-research/skill.toml
+[skill]
+name = "web-research"
+description = "Web'de arama yapar ve sonuçları özetler"
+version = "1.0.0"
+
+[dependencies]
+tools = ["web_fetch", "bash"]
+```
+
+```markdown
+<!-- skills/web-research/prompt.md — becerinin sistem istemi -->
+Sen bir araştırma asistanısın. Bir şeyi araştırmam istendiğinde:
+
+1. İçeriği almak için web_fetch kullan
+2. Sonuçları okunması kolay bir biçimde özetle
+3. Kaynakları URL'lerle göster
+```
+
+### Beceri Kullanımı
+
+Beceriler ajan başlangıcında otomatik olarak yüklenir. Konuşmalarda ada göre başvurun:
+
+```
+Kullanıcı: En son AI haberlerini bulmak için web-research becerisini kullan
+Bot: [web-research becerisini yükler, web_fetch'i yürütür, sonuçları özetler]
+```
+
+Kendi becerinizi oluştururken yukarıdaki dizin yapısını ve `skill.toml` örneğini şablon olarak kullanabilirsiniz.
+
+## Open Skills
+
+ZeroClaw, AI ajan yeteneklerini genişletmek için modüler ve sağlayıcıdan bağımsız bir sistem olan [Open Skills](https://github.com/openagents-com/open-skills)'i destekler.
+
+### Open Skills'i Etkinleştir
+
+```toml
+[skills]
+open_skills_enabled = true
+# open_skills_dir = "/path/to/open-skills" # isteğe bağlı
+```
+
+Ayrıca bu ayarları `ZEROCLAW_OPEN_SKILLS_ENABLED` ve `ZEROCLAW_OPEN_SKILLS_DIR` ortam değişkenleriyle çalışma zamanında geçersiz kılabilirsiniz.
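+
+Yapılandırma dosyasına dokunmadan geçersiz kılma kabaca şöyle görünür (dizin yolu varsayımsaldır):
+
+```bash
+# Taslak: Open Skills'i ortam değişkenleriyle etkinleştirme
+export ZEROCLAW_OPEN_SKILLS_ENABLED=true
+export ZEROCLAW_OPEN_SKILLS_DIR="$HOME/open-skills"   # varsayımsal yol
+zeroclaw agent restart                                # ajanı bu ortamda yeniden başlatır
+```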
+
+## Geliştirme
+
+```bash
+cargo build # Geliştirme derlemesi
+cargo build --release # Sürüm derlemesi (codegen-units=1, Raspberry Pi dahil tüm cihazlarda çalışır)
+cargo build --profile release-fast # Daha hızlı derleme (codegen-units=8, 16 GB+ RAM gerektirir)
+cargo test # Tam test paketini çalıştır
+cargo clippy --locked --all-targets -- -D clippy::correctness
+cargo fmt # Biçimlendir
+
+# SQLite vs Markdown karşılaştırma kıyaslamasını çalıştır
+cargo test --test memory_comparison -- --nocapture
+```
+
+### Ön push kancası
+
+Bir git kancası her push'tan önce `cargo fmt --check`, `cargo clippy -- -D warnings` ve `cargo test` çalıştırır. Bir kez etkinleştirin:
+
+```bash
+git config core.hooksPath .githooks
+```
+
+### Derleme Sorun Giderme (Linux'ta OpenSSL hataları)
+
+Bir `openssl-sys` derleme hatasıyla karşılaşırsanız, bağımlılıkları eşzamanlayın ve deponun lockfile'ı ile yeniden derleyin:
+
+```bash
+git pull
+cargo build --release --locked
+cargo install --path . --force --locked
+```
+
+ZeroClaw, HTTP/TLS bağımlılıkları için `rustls` kullanacak şekilde yapılandırılmıştır; `--locked`, geçişli grafiği temiz ortamlarda deterministik tutar.
+
+Geliştirme sırasında hızlı bir push'a ihtiyacınız olduğunda kancayı atlamak için:
+
+```bash
+git push --no-verify
+```
+
+## İşbirliği ve Belgeler
+
+Görev tabanlı bir harita için belge merkeziyle başlayın:
+
+- Belge Merkezi: [`docs/README.md`](docs/README.md)
+- Birleşik Docs İçindekiler: [`docs/SUMMARY.md`](docs/SUMMARY.md)
+- Komutlar Referansı: [`docs/commands-reference.md`](docs/commands-reference.md)
+- Yapılandırma Referansı: [`docs/config-reference.md`](docs/config-reference.md)
+- Sağlayıcı Referansı: [`docs/providers-reference.md`](docs/providers-reference.md)
+- Kanallar Referansı: [`docs/channels-reference.md`](docs/channels-reference.md)
+- Operasyonlar Runbook'u: [`docs/operations-runbook.md`](docs/operations-runbook.md)
+- Sorun Giderme: [`docs/troubleshooting.md`](docs/troubleshooting.md)
+- Docs Envanteri/Sınıflandırma: [`docs/docs-inventory.md`](docs/docs-inventory.md)
+- PR/Issue Triaj Anlık Görüntüsü (18 Şub. 2026 itibariyle): [`docs/project-triage-snapshot-2026-02-18.md`](docs/project-triage-snapshot-2026-02-18.md)
+
+Ana işbirliği referansları:
+
+- Belge Merkezi: [docs/README.md](docs/README.md)
+- Belge Şablonu: [docs/doc-template.md](docs/doc-template.md)
+- Belge Değişikliği Kontrol Listesi: [docs/README.md#4-documentation-change-checklist](docs/README.md#4-documentation-change-checklist)
+- Kanal Yapılandırma Referansı: [docs/channels-reference.md](docs/channels-reference.md)
+- Matrix Şifreli Oda Operasyonları: [docs/matrix-e2ee-guide.md](docs/matrix-e2ee-guide.md)
+- Katkı Kılavuzu: [CONTRIBUTING.md](CONTRIBUTING.md)
+- PR İş Akışı İlkesi: [docs/pr-workflow.md](docs/pr-workflow.md)
+- Gözden Geçiren Playbook'u (triaj + derinlemesine gözden geçirme): [docs/reviewer-playbook.md](docs/reviewer-playbook.md)
+- Sahiplik ve CI Triaj Haritası: [docs/ci-map.md](docs/ci-map.md)
+- Güvenlik Açıklama İlkesi: [SECURITY.md](SECURITY.md)
+
+Dağıtım ve çalışma zamanı operasyonları için:
+
+- Ağ Dağıtımı Kılavuzu: [docs/network-deployment.md](docs/network-deployment.md)
+- Proxy Agent Playbook'u: [docs/proxy-agent-playbook.md](docs/proxy-agent-playbook.md)
+
+## ZeroClaw'ı Destekleyin
+
+ZeroClaw işinize yardımcı oluyorsa ve sürekli geliştirmeyi desteklemek istiyorsanız, buradan bağış yapabilirsiniz:
+
+[Buy Me a Coffee](https://buymeacoffee.com/argenistherose)
+
+### 🙏 Özel Teşekkürler
+
+Bu açık kaynak çalışmasını ilham veren ve besleyen topluluklara ve kurumlara içten teşekkürler:
+
+- **Harvard Üniversitesi** — entelektüel merakı teşvik ettikleri ve mümkün olanın sınırlarını zorladıkları için.
+- **MIT** — açık bilgiyi, açık kaynağı ve teknolojinin herkes için erişilebilir olması gerektiği inancını savundukları için.
+- **Sundai Club** — topluluk, enerji ve önemli şeyler inşa etme konusundaki amansız irade için.
+- **Dünya ve Ötesi** 🌍✨ — açık kaynağı iyi bir güç haline getiren her katılımcıya, hayalpereste ve inşa edene. Bu senin için.
+
+En iyi fikirler her yerden geldiği için açık kaynakta inşa ediyoruz. Bunu okuyorsan, bunun bir parçasısın. Hoş geldin. 🦀❤️
+
+## ⚠️ Resmi Depo ve Taklit Uyarısı
+
+**Bu tek resmi ZeroClaw deposudur:**
+
+> https://github.com/zeroclaw-labs/zeroclaw
+
+ZeroClaw olduğunu iddia eden veya ZeroClaw Labs ile bağlantıyı ima eden başka herhangi bir depo, organizasyon, etki alanı veya paket **yetkisizdir ve bu projeyle bağlantılı değildir**. Bilinen yetkisiz forklar [TRADEMARK.md](TRADEMARK.md)'de listelenecektir.
+
+Taklit veya marka kötüye kullanımıyla karşılaşırsanız, lütfen [bir sorun açın](https://github.com/zeroclaw-labs/zeroclaw/issues).
+
+---
+
+## Lisans
+
+ZeroClaw, maksimum açıklık ve katılımcı koruması için çift lisanslıdır:
+
+| Lisans | Kullanım Durumları |
+| ---------------------------- | ------------------------------------------------------------ |
+| [MIT](LICENSE-MIT) | Açık kaynak, araştırma, akademik, kişisel kullanım |
+| [Apache 2.0](LICENSE-APACHE) | Patent koruması, kurumsal, ticari dağıtım |
+
+Lisanslardan birini seçebilirsiniz. **Katılımcılar otomatik olarak her ikisi altında da hak verir** — tam katılımcı anlaşması için [CLA.md](CLA.md)'ye bakın.
+
+### Marka
+
+**ZeroClaw** adı ve logosu, ZeroClaw Labs'ın tescilli markalarıdır. Bu lisans, onay veya bağlantı ima etmek için kullanım izni vermez. İzin verilen ve yasaklanan kullanımlar için [TRADEMARK.md](TRADEMARK.md)'ye bakın.
+
+### Katılımcı Korumaları
+
+- Katkılarınızın **telif hakkını sizde tutarsınız**
+- **Patent hibesi** (Apache 2.0) sizi diğer katılımcıların patent iddialarından korur
+- Katkılarınız commit geçmişinde ve [NOTICE](NOTICE)'da **kalıcı olarak atfedilir**
+- Katkıda bulunarak marka hakları devredilmez
+
+## Katkıda Bulunma
+
+[CONTRIBUTING.md](CONTRIBUTING.md) ve [CLA.md](CLA.md)'ye bakın. Bir trait uygulayın, bir PR gönderin:
+
+- CI iş akışı kılavuzu: [docs/ci-map.md](docs/ci-map.md)
+- Yeni `Provider` → `src/providers/`
+- Yeni `Channel` → `src/channels/`
+- Yeni `Observer` → `src/observability/`
+- Yeni `Tool` → `src/tools/`
+- Yeni `Memory` → `src/memory/`
+- Yeni `Tunnel` → `src/tunnel/`
+- Yeni `Skill` → `~/.zeroclaw/workspace/skills//`
+
+---
+
+**ZeroClaw** — Sıfır yük. Sıfır ödün. Her yerde dağıtın. Her şeyi değiştirin. 🦀
+
+## Yıldız Geçmişi
+
+
diff --git a/README.uk.md b/README.uk.md
new file mode 100644
index 000000000..bcc72db59
--- /dev/null
+++ b/README.uk.md
@@ -0,0 +1,179 @@
+
+
+
+
+
ZeroClaw 🦀
+
+
+ Нуль накладних витрат. Нуль компромісів. 100% Rust. 100% Агностичний.
+ ⚡️ Працює на $10 обладнанні з <5MB RAM: Це на 99% менше пам'яті ніж OpenClaw і на 98% дешевше ніж Mac mini!
+
+
+---
+
+## Що таке ZeroClaw?
+
+ZeroClaw — це легка, змінювана та розширювана інфраструктура AI-асистента, написана на Rust. Вона з'єднує різних LLM-провайдерів (Anthropic, OpenAI, Google, Ollama тощо) через уніфікований інтерфейс і підтримує багато каналів (Telegram, Matrix, CLI тощо).
+
+### Ключові особливості
+
+- **🦀 Написано на Rust**: Висока продуктивність, безпека пам'яті та абстракції без накладних витрат
+- **🔌 Агностичний до провайдерів**: Підтримка OpenAI, Anthropic, Google Gemini, Ollama та інших
+- **📱 Багатоканальність**: Telegram, Matrix (з E2EE), CLI та інші
+- **🧠 Плагінна пам'ять**: SQLite та Markdown бекенди
+- **🛠️ Розширювані інструменти**: Легко додавайте власні інструменти
+- **🔒 Безпека першочергово**: Зворотний проксі, дизайн з пріоритетом конфіденційності
+
+---
+
+## Швидкий старт
+
+### Вимоги
+
+- Rust 1.70+
+- API-ключ LLM-провайдера (Anthropic, OpenAI тощо)
+
+### Встановлення
+
+```bash
+# Клонуйте репозиторій
+git clone https://github.com/zeroclaw-labs/zeroclaw.git
+cd zeroclaw
+
+# Зберіть проект
+cargo build --release
+
+# Запустіть
+cargo run --release
+```
+
+### З Docker
+
+```bash
+docker run -d \
+ --name zeroclaw \
+ -e ANTHROPIC_API_KEY=your_key \
+ -v zeroclaw-data:/app/data \
+ zeroclaw/zeroclaw:latest
+```
+
+---
+
+## Конфігурація
+
+ZeroClaw налаштовується через файл конфігурації `config.toml` у форматі TOML. Перевірити синтаксис і значення можна командою `zeroclaw config validate`.
+
+```toml
+# Провайдер за замовчуванням
+[orchestrator]
+default_provider = "anthropic"
+
+# Конфігурація провайдерів
+[providers.anthropic]
+enabled = true
+api_key = "sk-ant-..."
+model = "claude-sonnet-4-20250514"
+
+[providers.openai]
+enabled = true
+api_key = "sk-..."
+model = "gpt-4o"
+
+# Конфігурація пам'яті
+[memory]
+kind = "sqlite"
+path = "~/.zeroclaw/workspace/memory/conversations.db"
+
+# Конфігурація каналів
+[channels.telegram]
+enabled = true
+bot_token = "123456:ABC-DEF..."
+allowed_users = [987654321]
+```
+
+---
+
+## Документація
+
+Для детальної документації дивіться:
+
+- [Хаб документації](docs/README.md)
+- [Довідник команд](docs/commands-reference.md)
+- [Довідник провайдерів](docs/providers-reference.md)
+- [Довідник каналів](docs/channels-reference.md)
+- [Довідник конфігурації](docs/config-reference.md)
+
+---
+
+## Внесок
+
+Внески вітаються! Будь ласка, прочитайте [Керівництво з внеску](CONTRIBUTING.md).
+
+---
+
+## Ліцензія
+
+Цей проект має подвійну ліцензію:
+
+- MIT License
+- Apache License, версія 2.0
+
+Дивіться [LICENSE-APACHE](LICENSE-APACHE) та [LICENSE-MIT](LICENSE-MIT) для деталей.
+
+---
+
+## Спільнота
+
+- [Telegram](https://t.me/zeroclawlabs)
+- [Facebook Group](https://www.facebook.com/groups/zeroclaw)
+- [WeChat Group](https://zeroclawlabs.cn/group.jpg)
+
+---
+
+## Спонсори
+
+Якщо ZeroClaw корисний для вас, будь ласка, розгляньте можливість купити нам каву:
+
+[Buy Me a Coffee](https://buymeacoffee.com/argenistherose)
diff --git a/README.ur.md b/README.ur.md
new file mode 100644
index 000000000..d7265eb3d
--- /dev/null
+++ b/README.ur.md
@@ -0,0 +1,197 @@
+
+
+
+
+
ZeroClaw 🦀
+
+
+ صفر اوور ہیڈ۔ صفر سمجھوتہ۔ 100% رسٹ۔ 100% اگنوسٹک۔
+ ⚡️ $10 کے ہارڈویئر پر <5MB RAM کے ساتھ چلتا ہے: یہ OpenClaw سے 99% کم میموری اور Mac mini سے 98% سستا ہے!
+
+ZeroClaw ایک ہلکا، قابل تبدیلی اور توسیع پذیر AI اسسٹنٹ انفراسٹرکچر ہے جو رسٹ میں بنایا گیا ہے۔ یہ مختلف LLM فراہم کنندگان (Anthropic, OpenAI, Google, Ollama, وغیرہ) کو ایک متحد انٹرفیس کے ذریعے جوڑتا ہے اور متعدد چینلز (Telegram, Matrix, CLI, وغیرہ) کی حمایت کرتا ہے۔
+
+
+### اہم خصوصیات
+
+
+- **🦀 رسٹ میں لکھا گیا**: اعلیٰ کارکردگی، میموری سیورٹی، اور بغیر لاگت کے ایبسٹریکشن
+- **🔌 فراہم کنندہ-اگنوسٹک**: OpenAI, Anthropic, Google Gemini, Ollama, اور دیگر کی حمایت
+- **📱 ملٹی چینل**: Telegram, Matrix (E2EE کے ساتھ), CLI, اور دیگر
+- **🧠 پلگ ایبل میموری**: SQLite اور Markdown بیک اینڈ
+- **🛠️ قابل توسیع ٹولز**: آسانی سے کسٹم ٹولز شامل کریں
+- **🔒 سیورٹی فرسٹ**: ریورس پراکسی، پرائیویسی فرسٹ ڈیزائن
+
+
+---
+
+## فوری شروعات
+
+### ضروریات
+
+
+- Rust 1.70+
+- ایک LLM فراہم کنندہ API کی (Anthropic, OpenAI, وغیرہ)
+
+اگر ZeroClaw آپ کے لیے مفید ہے، تو براہ کرم ہمیں کافی خریدنے پر غور کریں:
+
+
+[Buy Me a Coffee](https://buymeacoffee.com/argenistherose)
From b454a9d3011ba4655367e98bcdb693ebf41d93da Mon Sep 17 00:00:00 2001
From: Simian Astronaut 7
Date: Mon, 9 Mar 2026 20:33:34 -0400
Subject: [PATCH 08/35] Removed unused file
---
.tmp_todo_probe | 0
tests/dockerignore_test.rs | 1 -
2 files changed, 1 deletion(-)
delete mode 100644 .tmp_todo_probe
diff --git a/.tmp_todo_probe b/.tmp_todo_probe
deleted file mode 100644
index e69de29bb..000000000
diff --git a/tests/dockerignore_test.rs b/tests/dockerignore_test.rs
index 8af6fa8ca..314526df5 100644
--- a/tests/dockerignore_test.rs
+++ b/tests/dockerignore_test.rs
@@ -334,7 +334,6 @@ async fn dockerignore_pattern_matching_edge_cases() {
assert!(is_excluded(&patterns, "target/debug/build"));
assert!(is_excluded(&patterns, "README.md"));
assert!(is_excluded(&patterns, "brain.db"));
- assert!(is_excluded(&patterns, ".tmp_todo_probe"));
assert!(is_excluded(&patterns, ".env"));
// Should NOT match
From a6e5a6ffda67157b2123b84bee0cb6c9dd89ee35 Mon Sep 17 00:00:00 2001
From: Simian Astronaut 7
Date: Mon, 9 Mar 2026 21:44:08 -0400
Subject: [PATCH 09/35] chore(docs): relocate assets and project docs to docs/
subdirectories
Move zeroclaw.png and zero-claw.jpeg to docs/assets/, CLA.md to
docs/contributing/cla.md, and TRADEMARK.md to docs/project/trademark.md.
Update all cross-references in root README files (en, fr, ja, ru, vi, zh-CN)
to point to new locations.
Co-Authored-By: Claude Opus 4.6
---
README.fr.md | 12 ++++++------
README.ja.md | 4 ++--
README.md | 12 ++++++------
README.ru.md | 4 ++--
README.vi.md | 12 ++++++------
README.zh-CN.md | 4 ++--
.../assets/zeroclaw-comparison.jpeg | Bin
zeroclaw.png => docs/assets/zeroclaw.png | Bin
CLA.md => docs/contributing/cla.md | 2 +-
TRADEMARK.md => docs/project/trademark.md | 0
10 files changed, 25 insertions(+), 25 deletions(-)
rename zero-claw.jpeg => docs/assets/zeroclaw-comparison.jpeg (100%)
rename zeroclaw.png => docs/assets/zeroclaw.png (100%)
rename CLA.md => docs/contributing/cla.md (97%)
rename TRADEMARK.md => docs/project/trademark.md (100%)
diff --git a/README.fr.md b/README.fr.md
index fdbc4cc45..6edbcc2e5 100644
--- a/README.fr.md
+++ b/README.fr.md
@@ -1,5 +1,5 @@
-
+
ZeroClaw 🦀
@@ -95,7 +95,7 @@ Benchmark rapide sur machine locale (macOS arm64, fév. 2026) normalisé pour ma
> Notes : Les résultats ZeroClaw sont mesurés sur des builds de production utilisant `/usr/bin/time -l`. OpenClaw nécessite le runtime Node.js (typiquement ~390 Mo de surcharge mémoire supplémentaire), tandis que NanoBot nécessite le runtime Python. PicoClaw et ZeroClaw sont des binaires statiques. Les chiffres RAM ci-dessus sont la mémoire runtime ; les exigences de compilation build-time sont plus élevées.
-
+
### Mesure locale reproductible
@@ -826,7 +826,7 @@ Nous construisons en open source parce que les meilleures idées viennent de par
>
-Tout autre dépôt, organisation, domaine ou package prétendant être "ZeroClaw" ou impliquant une affiliation avec ZeroClaw Labs est **non autorisé et non affilié à ce projet**. Les forks non autorisés connus seront listés dans [TRADEMARK.md](TRADEMARK.md).
+Tout autre dépôt, organisation, domaine ou package prétendant être "ZeroClaw" ou impliquant une affiliation avec ZeroClaw Labs est **non autorisé et non affilié à ce projet**. Les forks non autorisés connus seront listés dans [TRADEMARK.md](docs/project/trademark.md).
Si vous rencontrez une usurpation d'identité ou une utilisation abusive de marque, veuillez [ouvrir une issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
@@ -841,11 +841,11 @@ ZeroClaw est sous double licence pour une ouverture maximale et la protection de
| [MIT](LICENSE-MIT) | Open-source, recherche, académique, usage personnel |
| [Apache 2.0](LICENSE-APACHE) | Protection de brevet, institutionnel, déploiement commercial |
-Vous pouvez choisir l'une ou l'autre licence. **Les contributeurs accordent automatiquement des droits sous les deux** — voir [CLA.md](CLA.md) pour l'accord de contributeur complet.
+Vous pouvez choisir l'une ou l'autre licence. **Les contributeurs accordent automatiquement des droits sous les deux** — voir [CLA.md](docs/contributing/cla.md) pour l'accord de contributeur complet.
### Marque
-Le nom **ZeroClaw** et le logo sont des marques déposées de ZeroClaw Labs. Cette licence n'accorde pas la permission de les utiliser pour impliquer une approbation ou une affiliation. Voir [TRADEMARK.md](TRADEMARK.md) pour les utilisations permises et interdites.
+Le nom **ZeroClaw** et le logo sont des marques déposées de ZeroClaw Labs. Cette licence n'accorde pas la permission de les utiliser pour impliquer une approbation ou une affiliation. Voir [TRADEMARK.md](docs/project/trademark.md) pour les utilisations permises et interdites.
### Protections des Contributeurs
@@ -856,7 +856,7 @@ Le nom **ZeroClaw** et le logo sont des marques déposées de ZeroClaw Labs. Cet
## Contribuer
-Voir [CONTRIBUTING.md](CONTRIBUTING.md) et [CLA.md](CLA.md). Implémentez un trait, soumettez une PR :
+Voir [CONTRIBUTING.md](CONTRIBUTING.md) et [CLA.md](docs/contributing/cla.md). Implémentez un trait, soumettez une PR :
- Guide de workflow CI : [docs/ci-map.md](docs/ci-map.md)
- Nouveau `Provider` → `src/providers/`
diff --git a/README.ja.md b/README.ja.md
index 848ae9cb1..4e5889e2f 100644
--- a/README.ja.md
+++ b/README.ja.md
@@ -1,5 +1,5 @@
@@ -95,7 +95,7 @@ Local machine quick benchmark (macOS arm64, Feb 2026) normalized for 0.8GHz edge
> Notes: ZeroClaw results are measured on release builds using `/usr/bin/time -l`. OpenClaw requires Node.js runtime (typically ~390MB additional memory overhead), while NanoBot requires Python runtime. PicoClaw and ZeroClaw are static binaries. The RAM figures above are runtime memory; build-time compilation requirements are higher.
-
+
### Reproducible local measurement
@@ -1110,7 +1110,7 @@ We're building in the open because the best ideas come from everywhere. If you'r
> https://github.com/zeroclaw-labs/zeroclaw
-Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and not affiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](TRADEMARK.md).
+Any other repository, organization, domain, or package claiming to be "ZeroClaw" or implying affiliation with ZeroClaw Labs is **unauthorized and not affiliated with this project**. Known unauthorized forks will be listed in [TRADEMARK.md](docs/project/trademark.md).
If you encounter impersonation or trademark misuse, please [open an issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
@@ -1125,11 +1125,11 @@ ZeroClaw is dual-licensed for maximum openness and contributor protection:
| [MIT](LICENSE-MIT) | Open-source, research, academic, personal use |
| [Apache 2.0](LICENSE-APACHE) | Patent protection, institutional, commercial deployment |
-You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](CLA.md) for the full contributor agreement.
+You may choose either license. **Contributors automatically grant rights under both** — see [CLA.md](docs/contributing/cla.md) for the full contributor agreement.
### Trademark
-The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](TRADEMARK.md) for permitted and prohibited uses.
+The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license does not grant permission to use them to imply endorsement or affiliation. See [TRADEMARK.md](docs/project/trademark.md) for permitted and prohibited uses.
### Contributor Protections
@@ -1142,7 +1142,7 @@ The **ZeroClaw** name and logo are trademarks of ZeroClaw Labs. This license doe
New to ZeroClaw? Look for issues labeled [`good first issue`](https://github.com/zeroclaw-labs/zeroclaw/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) — see our [Contributing Guide](CONTRIBUTING.md#first-time-contributors) for how to get started.
-See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](CLA.md). Implement a trait, submit a PR:
+See [CONTRIBUTING.md](CONTRIBUTING.md) and [CLA.md](docs/contributing/cla.md). Implement a trait, submit a PR:
- CI workflow guide: [docs/ci-map.md](docs/ci-map.md)
- New `Provider` → `src/providers/`
diff --git a/README.ru.md b/README.ru.md
index cfb10393e..3631a31f3 100644
--- a/README.ru.md
+++ b/README.ru.md
@@ -1,5 +1,5 @@
-
+
ZeroClaw 🦀(Русский)
@@ -87,7 +87,7 @@ ZeroClaw — это производительная и расширяемая
> Примечание: результаты ZeroClaw получены на release-сборке с помощью `/usr/bin/time -l`. OpenClaw требует Node.js runtime; только этот runtime обычно добавляет около 390MB дополнительного потребления памяти. NanoBot требует Python runtime. PicoClaw и ZeroClaw — статические бинарники.
diff --git a/README.vi.md b/README.vi.md
--- a/README.vi.md
+++ b/README.vi.md
@@ -95,7 +95,7 @@ Bảng này dành cho các thông báo quan trọng (thay đổi không tương
> Ghi chú: Kết quả ZeroClaw được đo trên release build sử dụng `/usr/bin/time -l`. OpenClaw yêu cầu runtime Node.js (thường thêm ~390MB bộ nhớ overhead), còn NanoBot yêu cầu runtime Python. PicoClaw và ZeroClaw là các static binary. Số RAM ở trên là bộ nhớ runtime; yêu cầu biên dịch lúc build-time sẽ cao hơn.
-
+
### Tự đo trên máy bạn
@@ -1003,7 +1003,7 @@ Chúng tôi xây dựng công khai vì ý tưởng hay đến từ khắp nơi.
**Đây là repository ZeroClaw chính thức duy nhất:**
>
-Bất kỳ repository, tổ chức, tên miền hay gói nào khác tuyên bố là "ZeroClaw" hoặc ngụ ý liên kết với ZeroClaw Labs đều là **không được ủy quyền và không liên kết với dự án này**. Các fork không được ủy quyền đã biết sẽ được liệt kê trong [TRADEMARK.md](TRADEMARK.md).
+Bất kỳ repository, tổ chức, tên miền hay gói nào khác tuyên bố là "ZeroClaw" hoặc ngụ ý liên kết với ZeroClaw Labs đều là **không được ủy quyền và không liên kết với dự án này**. Các fork không được ủy quyền đã biết sẽ được liệt kê trong [TRADEMARK.md](docs/project/trademark.md).
Nếu bạn phát hiện hành vi mạo danh hoặc lạm dụng nhãn hiệu, vui lòng [mở một issue](https://github.com/zeroclaw-labs/zeroclaw/issues).
@@ -1018,11 +1018,11 @@ ZeroClaw được cấp phép kép để tối đa hóa tính mở và bảo v
| [MIT](LICENSE-MIT) | Mã nguồn mở, nghiên cứu, học thuật, sử dụng cá nhân |
| [Apache 2.0](LICENSE-APACHE) | Bảo hộ bằng sáng chế, triển khai tổ chức, thương mại |
-Bạn có thể chọn một trong hai giấy phép. **Người đóng góp tự động cấp quyền theo cả hai** — xem [CLA.md](CLA.md) để biết thỏa thuận đóng góp đầy đủ.
+Bạn có thể chọn một trong hai giấy phép. **Người đóng góp tự động cấp quyền theo cả hai** — xem [CLA.md](docs/contributing/cla.md) để biết thỏa thuận đóng góp đầy đủ.
### Nhãn hiệu
-Tên **ZeroClaw** và logo là nhãn hiệu của ZeroClaw Labs. Giấy phép này không cấp phép sử dụng chúng để ngụ ý chứng thực hoặc liên kết. Xem [TRADEMARK.md](TRADEMARK.md) để biết các sử dụng được phép và bị cấm.
+Tên **ZeroClaw** và logo là nhãn hiệu của ZeroClaw Labs. Giấy phép này không cấp phép sử dụng chúng để ngụ ý chứng thực hoặc liên kết. Xem [TRADEMARK.md](docs/project/trademark.md) để biết các sử dụng được phép và bị cấm.
### Bảo vệ người đóng góp
@@ -1033,7 +1033,7 @@ Tên **ZeroClaw** và logo là nhãn hiệu của ZeroClaw Labs. Giấy phép n
## Đóng góp
-Xem [CONTRIBUTING.md](CONTRIBUTING.md) và [CLA.md](CLA.md). Triển khai một trait, gửi PR:
+Xem [CONTRIBUTING.md](CONTRIBUTING.md) và [CLA.md](docs/contributing/cla.md). Triển khai một trait, gửi PR:
- Hướng dẫn quy trình CI: [docs/ci-map.md](docs/ci-map.md)
- `Provider` mới → `src/providers/`
- `Channel` mới → `src/channels/`
diff --git a/README.zh-CN.md b/README.zh-CN.md
index 55f86ccab..f2f2699d3 100644
--- a/README.zh-CN.md
+++ b/README.zh-CN.md
@@ -1,5 +1,5 @@