feat(multi): LinkedIn tool, WhatsApp voice notes, and Anthropic OAuth fix (#3604)
* feat(tools): add native LinkedIn integration tool

  Add a config-gated LinkedIn tool that enables ZeroClaw to interact with
  LinkedIn's REST API via OAuth2. Supports creating posts, listing own posts,
  commenting, reacting, deleting posts, viewing engagement stats, and
  retrieving profile info.

  Architecture:
  - linkedin.rs: Tool trait impl with action-dispatched design
  - linkedin_client.rs: OAuth2 token management and API wrappers
  - Config-gated via [linkedin] enabled = false (default off)
  - Credentials loaded from workspace .env file
  - Automatic token refresh with line-targeted .env update

  39 unit tests covering security enforcement, parameter validation,
  credential parsing, and token management.

* feat(linkedin): configurable content strategy and API version

  - Expand LinkedInConfig with api_version and nested LinkedInContentConfig
    (rss_feeds, github_users, github_repos, topics, persona, instructions)
  - Add get_content_strategy tool action so agents can read config at runtime
  - Fix hardcoded LinkedIn API version 202402 (expired) → configurable,
    defaulting to 202602
  - LinkedInClient accepts api_version as a parameter instead of a static header
  - 4 new tests (43 total), all passing

* feat(linkedin): add multi-provider image generation for posts

  Add ImageGenerator with a provider chain (DALL-E, Stability AI, Imagen, Flux)
  and an SVG fallback card. The LinkedIn tool's create_post now supports a
  generate_image parameter. Includes LinkedIn image upload (register → upload →
  reference), configurable provider priority, and 14 new tests.

* feat(whatsapp): add voice note transcription and TTS voice replies

  - Add STT support: download incoming voice notes via wa-rs, transcribe with
    OpenAI Whisper (or Groq), send the transcribed text to the agent
  - Add TTS support: synthesize agent replies to Opus audio via OpenAI TTS,
    upload encrypted media, send as a WhatsApp voice note (ptt=true)
  - Voice replies only trigger when the user sends a voice note; text messages
    get text replies only. The flag is consumed after one use to prevent
    multiple voice notes per agent turn
  - Fix the transcription module to support an OpenAI API key (not just Groq):
    auto-detect the provider from the API URL, check ANTHROPIC_OAUTH_TOKEN /
    OPENAI_API_KEY / GROQ_API_KEY env vars in priority order
  - Add an optional api_key field to TranscriptionConfig for an explicit key
  - Add response_format: opus to OpenAI TTS for WhatsApp compatibility
  - Add a channel capability note so the agent knows TTS is automatic
  - Wire transcription + TTS config into the WhatsApp Web channel builder

* fix(providers): prefer ANTHROPIC_OAUTH_TOKEN over global api_key

  When the Anthropic provider is used alongside a non-Anthropic primary
  provider (e.g. a custom: gateway), the global api_key would be passed as a
  credential override, bypassing provider-specific env vars. This caused
  Claude Code subscription tokens (sk-ant-oat01-*) to be ignored in favor of
  the unrelated gateway JWT.

  Fix: for the anthropic provider, check the ANTHROPIC_OAUTH_TOKEN and
  ANTHROPIC_API_KEY env vars before falling back to the credential override.
  This mirrors the existing MiniMax OAuth pattern and enables
  subscription-based auth to work as a fallback provider.

* feat(linkedin): add scheduled post support via LinkedIn API

  Add a scheduled_at parameter to create_post and create_post_with_image.
  When provided (RFC 3339 timestamp), the post is created as a DRAFT with
  scheduledPublishOptions so LinkedIn publishes it automatically at the
  specified time. This enables the cron job to schedule a week of posts in
  advance directly on LinkedIn.

* fix(providers): prefer env vars for openai and groq credential resolution

  Generalize the Anthropic OAuth fix to also cover the openai and groq
  providers. When used alongside a non-matching primary provider (e.g. a
  custom: gateway), the global api_key would be passed as a credential
  override, causing auth failures. Now checks provider-specific env vars
  (OPENAI_API_KEY, GROQ_API_KEY) before falling back to the credential
  override.

* fix(whatsapp): debounce voice replies to the final answer only

  The voice note TTS was triggering on the first send() call, which was often
  intermediate tool output (URLs, JSON, web fetch results) rather than the
  actual answer. This produced incomprehensible voice notes.

  Fix: accumulate substantive replies (>30 chars, not URLs/JSON/code) in a
  pending_voice map. A spawned debounce task waits 4 seconds after the last
  substantive message, then synthesizes and sends ONE voice note with the
  final answer. Intermediate tool outputs are skipped. This ensures the user
  hears the actual answer in the correct language, not raw tool output in
  English.

* fix(whatsapp): voice in = voice out, text in = text out

  Rewrite the voice reply logic with a clean separation:
  - Voice note received: ALL text output suppressed. The latest message is
    accumulated silently. After 5s of no new messages, ONE voice note is sent
    with the final answer. No tool outputs, no text, just voice.
  - Text received: normal text reply, no voice.

  Atomic debounce: multiple spawned tasks race, but only one can extract the
  pending message (remove-inside-lock pattern). Prevents duplicate voice notes.

* fix(whatsapp): voice replies send both text and voice note

  Voice note in → text replies sent normally in real time PLUS one voice note
  with the final answer after a 10s debounce. Only substantive natural-language
  messages are voiced (tool outputs, URLs, JSON, and code blocks are filtered
  out). The longer debounce (10s) ensures the agent completes its full tool
  chain before the voice note fires. Text in → text out only, no voice.

* fix(channels): suppress tool narration and ack reactions

  - Add a system prompt instruction telling the agent to NEVER narrate tool
    usage (no "Let me fetch..." or "I will use http_request...")
  - Disable ack_reactions (emoji reactions on incoming messages)
  - Users see only the final answer, no intermediate steps

* docs(claude): add full CONTRIBUTING.md guidelines to CLAUDE.md

  Add PR template requirements, code naming conventions, architecture boundary
  rules, validation commands, and branch naming guidance directly to CLAUDE.md
  for AI assistant reference.

* fix(docs): add blank lines around headings in CLAUDE.md for markdown lint

* fix(channels): strengthen tool narration suppression and fix large_futures

  - Move the anti-narration instruction to the top of the channel system prompt
  - Add an emphatic instruction for WhatsApp/voice channels specifically
  - Add an outbound message filter to strip tool-call-like patterns (⏳, 🔧)
  - Box::pin the two-phase heartbeat agent::run call (16664 bytes on Linux)
This commit is contained in:
parent 220745e217
commit 906951a587

91	CLAUDE.md
@@ -31,6 +31,9 @@ Key extension points:
- `src/observability/traits.rs` (`Observer`)
- `src/runtime/traits.rs` (`RuntimeAdapter`)
- `src/peripherals/traits.rs` (`Peripheral`) — hardware boards (STM32, RPi GPIO)
- `src/security/taint.rs` (`TaintLabel`, `TaintSource`) — information flow tracking
- `src/sop/workflow.rs` (`WorkflowStep`, `StepHandler`) — workflow DAG engine
- `src/hands/types.rs` (`Hand`, `HandContext`) — autonomous agent packages

## Repository Map
@@ -44,6 +47,8 @@ Key extension points:
- `src/providers/` — model providers and resilient wrapper
- `src/channels/` — Telegram/Discord/Slack/etc channels
- `src/tools/` — tool execution surface (shell, file, memory, browser)
- `src/hands/` — autonomous knowledge-accumulating agent packages (Hands system)
- `src/sop/` — standard operating procedures + workflow DAG engine
- `src/peripherals/` — hardware peripherals (STM32, RPi GPIO)
- `src/runtime/` — runtime adapters (currently native)
- `docs/` — topic-based documentation (setup-guides, reference, ops, security, hardware, contributing, maintainers)
@@ -83,8 +88,94 @@ Branch/commit/PR rules:
- Do not hide behavior-changing side effects in refactor commits.
- Do not include personal identity or sensitive information in test data, examples, docs, or commits.

## Contributing Guidelines (from CONTRIBUTING.md)

### Branch & PR Rules

- `master` is the ONLY default branch. There is no `main` branch.
- Fork the repo, create a `feat/*` or `fix/*` branch from `master`, open a PR targeting `master`.
- IMPORTANT: Use `feat/` NOT `feature/` — the prefix must be `feat/` or `fix/` exactly.
- Use conventional commit titles (e.g., `feat(channels):`, `fix(security):`, `chore(ci):`).
- Complete **every section** in `.github/pull_request_template.md` — all 15 sections are mandatory.
- One concern per PR. Prefer size `XS/S/M`. Split large work into stacked PRs.
- If replacing an older PR, add `Supersedes #...` and request maintainers close the old one.

### PR Template (All Sections Required)

1. Summary (base branch, problem, what changed, what didn't change)
2. Label Snapshot (risk, size, scope, module labels)
3. Change Metadata (change type, primary scope)
4. Linked Issue
5. Validation Evidence (`cargo fmt`, `cargo clippy`, `cargo test` results)
6. Security Impact (permissions, network calls, secrets, file access)
7. Privacy and Data Hygiene (pass/needs-follow-up, neutral wording)
8. Compatibility / Migration (backward compatible, config changes, migration)
9. i18n Follow-Through (triggered? locale parity updated?)
10. Human Verification (verified scenarios, edge cases, gaps)
11. Side Effects / Blast Radius (affected subsystems, unintended effects, guardrails)
12. Rollback Plan (fast rollback, feature flags, failure symptoms)
13. Risks and Mitigations

### Collaboration Tracks (Risk-Based)

- **Track A (Low risk)**: docs/tests/chore — 1 maintainer review + green CI
- **Track B (Medium risk)**: providers/channels/memory/tools behavior — 1 subsystem-aware review + validation evidence
- **Track C (High risk)**: `src/security/**`, `src/runtime/**`, `src/gateway/**`, `.github/workflows/**` — 2-pass review, rollback plan required

### Code Naming Conventions

- Rust casing: modules/files `snake_case`, types/traits/enums `PascalCase`, functions/variables `snake_case`, constants `SCREAMING_SNAKE_CASE`
- Domain-first naming: `DiscordChannel`, `SecurityPolicy`, `SqliteMemory` (not `Manager`, `Helper`, `Util`)
- Trait implementers: `*Provider`, `*Channel`, `*Tool`, `*Memory`, `*Observer`
- Factory keys: lowercase, stable (`openai`, `discord`, `shell`)
- Tests: behavior-oriented names (`subject_expected_behavior`), neutral project-scoped fixtures
- Identity labels: use ZeroClaw-native identifiers only (`ZeroClawAgent`, `zeroclaw_user`, `zeroclaw_node`)

### Architecture Boundary Rules

- Extend via trait implementations + factory registration before considering broad refactors.
- Dependency direction: concrete integrations depend on shared traits/config/util, not on each other.
- No cross-subsystem coupling (provider <-> channel internals, tools mutating security directly).
- Single-purpose modules: `agent` = orchestration, `channels` = transport, `providers` = model I/O, `security` = policy, `tools` = execution, `memory` = persistence.
- Shared abstractions only after rule-of-three (3+ stable callers).
- `src/config/schema.rs` keys are public contract — document compatibility, migration, rollback.

### Validation Commands

```bash
# Required before every PR
cargo fmt --all -- --check
cargo clippy --all-targets -- -D warnings
cargo test --locked

# Full quality gate
./scripts/ci/rust_quality_gate.sh

# Strict delta lint (changed lines only)
./scripts/ci/rust_strict_delta_gate.sh
```

### What Must Never Be Committed

- `.env` files (use `.env.example` only)
- API keys, tokens, passwords, credentials (plain or encrypted)
- OAuth tokens, session identifiers, webhook signing secrets
- `~/.zeroclaw/.secret_key` or similar key files
- Personal identifiers or real user data in tests/fixtures

### CI Details

- Quality Gate: Lint (fmt+clippy), Build (x86_64-linux + aarch64-darwin), Test, Security Audit
- CI uses Rust 1.92.0 (may differ from the local toolchain)
- `clippy::large_futures` triggers at 16KB on Linux — fix with `Box::pin()`
- Windows-only test failures (Unix path assumptions) don't affect CI

## Linked References

- `@docs/contributing/change-playbooks.md` — adding providers, channels, tools, peripherals; security/gateway changes; architecture boundaries
- `@docs/contributing/pr-discipline.md` — privacy rules, superseded-PR attribution/templates, handoff template
- `@docs/contributing/docs-contract.md` — docs system contract, i18n rules, locale parity
- `CONTRIBUTING.md` — full contributor contract, collaboration tracks, PR template, naming conventions
314	docs/superpowers/specs/2026-03-13-linkedin-tool-design.md	(new file)
@@ -0,0 +1,314 @@
# LinkedIn Tool — Design Spec

**Date:** 2026-03-13
**Status:** Approved
**Risk tier:** Medium (new tool, external API, credential handling)

## Summary

Native LinkedIn integration tool for ZeroClaw. Enables the agent to create posts,
list its own posts, comment, react, delete posts, view post engagement, and retrieve
profile info — all through LinkedIn's official REST API with OAuth2 authentication.

## Motivation

Enable ZeroClaw to autonomously publish LinkedIn content on a schedule (via cron),
drawing from the user's memory, project history, and Medium feed. Removes the
dependency on third-party platforms like Composio for social media posting.

## Required OAuth2 scopes

Users must grant these scopes when creating their LinkedIn Developer App:

| Scope | Required for |
|---|---|
| `w_member_social` | `create_post`, `comment`, `react`, `delete_post` |
| `r_liteprofile` | `get_profile` |
| `r_member_social` | `list_posts`, `get_engagement` |

The "Share on LinkedIn" and "Sign In with LinkedIn using OpenID Connect" products
must be requested in the LinkedIn Developer App dashboard (both auto-approve).

## Architecture

### File structure

| File | Role |
|---|---|
| `src/tools/linkedin.rs` | `Tool` trait impl, action dispatch, parameter validation |
| `src/tools/linkedin_client.rs` | OAuth2 token management, LinkedIn REST API wrappers |
| `src/tools/mod.rs` | Module declaration, pub use, registration in `all_tools_with_runtime` |
| `src/config/schema.rs` | `[linkedin]` config section (`LinkedInConfig`) |
| `src/config/mod.rs` | Add `LinkedInConfig` to pub use exports |

### No new dependencies

All required crates are already in `Cargo.toml`: `reqwest` (HTTP), `serde`/`serde_json`
(serialization), `chrono` (timestamps), `tokio` (async fs for .env reading).

## Config

### `config.toml`

```toml
[linkedin]
enabled = false
```

### `.env` credentials

```bash
LINKEDIN_CLIENT_ID=your_client_id
LINKEDIN_CLIENT_SECRET=your_client_secret
LINKEDIN_ACCESS_TOKEN=your_access_token
LINKEDIN_REFRESH_TOKEN=your_refresh_token
LINKEDIN_PERSON_ID=your_person_urn_id
```

Token format: `LINKEDIN_PERSON_ID` is the bare ID (e.g., `dXNlcjpA...`), not the
full URN. The client prefixes `urn:li:person:` internally.

## Tool design

### Single tool, action-dispatched

Tool name: `linkedin`

The LLM calls it with an `action` field and action-specific parameters:

```json
{ "action": "create_post", "text": "...", "visibility": "PUBLIC" }
```

### Actions

| Action | Params | API | Write? |
|---|---|---|---|
| `create_post` | `text`, `visibility?` (PUBLIC/CONNECTIONS, default PUBLIC), `article_url?`, `article_title?` | `POST /rest/posts` | Yes |
| `list_posts` | `count?` (default 10, max 50) | `GET /rest/posts?author={personUrn}&q=author` | No |
| `comment` | `post_id`, `text` | `POST /rest/socialActions/{id}/comments` | Yes |
| `react` | `post_id`, `reaction_type` (LIKE/CELEBRATE/SUPPORT/LOVE/INSIGHTFUL/FUNNY) | `POST /rest/reactions?actor={actorUrn}` | Yes |
| `delete_post` | `post_id` | `DELETE /rest/posts/{id}` | Yes |
| `get_engagement` | `post_id` | `GET /rest/socialActions/{id}` | No |
| `get_profile` | (none) | `GET /rest/me` | No |

Note: `list_posts` queries posts authored by the authenticated user (not a home feed —
LinkedIn does not expose a home feed API). `get_engagement` returns like/comment/share
counts for a specific post via the socialActions endpoint.

### Security enforcement

- Write actions (`create_post`, `comment`, `react`, `delete_post`): check `security.can_act()` + `security.record_action()`
- Read actions (`list_posts`, `get_engagement`, `get_profile`): still call `record_action()` for rate tracking
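The gating above is a single check per action. A minimal sketch, with `SecurityPolicy` reduced to a hypothetical stub (the real type lives in `src/security/` and has a richer interface):

```rust
// Sketch only: `SecurityPolicy` is a hypothetical stub standing in for
// ZeroClaw's real security type; the method names follow the spec.
struct SecurityPolicy {
    read_only: bool,
}

impl SecurityPolicy {
    fn can_act(&self) -> bool {
        !self.read_only
    }
    fn record_action(&self) {
        // The real implementation records the action for rate tracking.
    }
}

fn gate_action(action: &str, security: &SecurityPolicy) -> Result<(), String> {
    let is_write = matches!(action, "create_post" | "comment" | "react" | "delete_post");
    if is_write && !security.can_act() {
        return Err(format!("{action} blocked by security policy"));
    }
    // Read actions are recorded too, so they count toward rate limits.
    security.record_action();
    Ok(())
}
```

The point of recording reads as well as writes is that rate tracking covers the whole API surface, not just mutations.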
### Parameter validation

- `article_title` without `article_url` returns the error "article_title requires article_url"
- `react` requires both `post_id` and `reaction_type`
- `comment` requires both `post_id` and `text`
- `create_post` requires `text` (non-empty)
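Two of these rules are cross-parameter checks. A sketch under the assumption of a free-standing validator (`validate` is illustrative; the real checks live inside the tool's action dispatch):

```rust
// Hypothetical helper illustrating the cross-parameter rules above.
fn validate(
    action: &str,
    text: Option<&str>,
    article_url: Option<&str>,
    article_title: Option<&str>,
) -> Result<(), String> {
    // article_title is only meaningful alongside article_url.
    if article_title.is_some() && article_url.is_none() {
        return Err("article_title requires article_url".to_string());
    }
    // create_post must carry non-empty text.
    if action == "create_post" && text.map_or(true, |t| t.trim().is_empty()) {
        return Err("create_post requires non-empty text".to_string());
    }
    Ok(())
}
```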
### Parameter schema

```json
{
  "type": "object",
  "properties": {
    "action": {
      "type": "string",
      "enum": ["create_post", "list_posts", "comment", "react", "delete_post", "get_engagement", "get_profile"],
      "description": "The LinkedIn action to perform"
    },
    "text": {
      "type": "string",
      "description": "Post or comment text content"
    },
    "visibility": {
      "type": "string",
      "enum": ["PUBLIC", "CONNECTIONS"],
      "description": "Post visibility (default: PUBLIC)"
    },
    "article_url": {
      "type": "string",
      "description": "URL to attach as article/link preview"
    },
    "article_title": {
      "type": "string",
      "description": "Title for the attached article (requires article_url)"
    },
    "post_id": {
      "type": "string",
      "description": "LinkedIn post URN for comment/react/delete/engagement"
    },
    "reaction_type": {
      "type": "string",
      "enum": ["LIKE", "CELEBRATE", "SUPPORT", "LOVE", "INSIGHTFUL", "FUNNY"],
      "description": "Reaction type for the react action"
    },
    "count": {
      "type": "integer",
      "description": "Number of posts to retrieve (default 10, max 50)"
    }
  },
  "required": ["action"]
}
```

## LinkedIn client

### `LinkedInClient` struct

```rust
pub struct LinkedInClient {
    workspace_dir: PathBuf,
}
```

Uses `crate::config::build_runtime_proxy_client_with_timeouts("tool.linkedin", 30, 10)`
per request (same pattern as Pushover), respecting runtime proxy configuration.

### Credential loading

Same pattern as `PushoverTool`: reads `.env` from `workspace_dir`, parses key-value
pairs, and supports the `export` prefix and quoted values.
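A minimal sketch of that parsing, assuming a standalone helper (`parse_env` is illustrative, not the real function name):

```rust
// Sketch of the .env parsing described above: supports `export KEY=...`,
// single- or double-quoted values, and `#` comment lines.
use std::collections::HashMap;

fn parse_env(contents: &str) -> HashMap<String, String> {
    let mut vars = HashMap::new();
    for line in contents.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue; // skip blanks and comments
        }
        // Tolerate the shell-style `export ` prefix.
        let line = line.strip_prefix("export ").unwrap_or(line);
        if let Some((key, value)) = line.split_once('=') {
            let value = value.trim().trim_matches('"').trim_matches('\'');
            vars.insert(key.trim().to_string(), value.to_string());
        }
    }
    vars
}
```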
### Token refresh

1. All API calls use `LINKEDIN_ACCESS_TOKEN` in the `Authorization: Bearer` header
2. On a 401 response, attempt a token refresh:
   - `POST https://www.linkedin.com/oauth/v2/accessToken`
   - Body: `grant_type=refresh_token&refresh_token=...&client_id=...&client_secret=...`
3. On successful refresh, update `LINKEDIN_ACCESS_TOKEN` in the `.env` file via
   line-targeted replacement (read all lines, replace the matching key line, write back).
   Preserves `export` prefixes, quoting style, comments, and all other keys.
4. Retry the original request once
5. If the refresh also fails, return an error with a clear message about re-authentication
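Step 3's line-targeted replacement can be sketched as follows. `replace_env_value` is a hypothetical name, and this simplified version preserves indentation and `export` prefixes but does not re-apply the original quoting style as the real implementation does:

```rust
// Sketch only: rewrite exactly the matching key's line, leaving comments,
// other keys, and any `export ` prefix untouched.
fn replace_env_value(contents: &str, key: &str, new_value: &str) -> String {
    contents
        .lines()
        .map(|line| {
            let trimmed = line.trim_start();
            let bare = trimmed.strip_prefix("export ").unwrap_or(trimmed);
            match bare.split_once('=') {
                Some((k, _)) if k.trim() == key => {
                    // Keep everything up to and including '=' (indent, export, key).
                    let prefix_len = line.len() - bare.len() + bare.find('=').unwrap() + 1;
                    format!("{}{}", &line[..prefix_len], new_value)
                }
                _ => line.to_string(),
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}
```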
### API versioning

All requests include:
- `LinkedIn-Version: 202402` header (stable version)
- `X-Restli-Protocol-Version: 2.0.0` header
- `Content-Type: application/json`

### React endpoint details

The `react` action sends:
- `POST /rest/reactions?actor=urn:li:person:{personId}`
- Body: `{"reactionType": "LIKE", "object": "urn:li:ugcPost:{postId}"}`

The actor URN is derived from `LINKEDIN_PERSON_ID` in `.env`.

### Response parsing

The client returns structured data types:

```rust
pub struct PostSummary {
    pub id: String,
    pub text: String,
    pub created_at: String,
    pub visibility: String,
}

pub struct ProfileInfo {
    pub id: String,
    pub name: String,
    pub headline: String,
}

pub struct EngagementSummary {
    pub likes: u64,
    pub comments: u64,
    pub shares: u64,
}
```

## Registration

In `src/tools/mod.rs` (follows the `security_ops` config-gated pattern):

```rust
// Module declarations
pub mod linkedin;
pub mod linkedin_client;

// Re-exports
pub use linkedin::LinkedInTool;

// In all_tools_with_runtime():
if root_config.linkedin.enabled {
    tool_arcs.push(Arc::new(LinkedInTool::new(
        security.clone(),
        workspace_dir.to_path_buf(),
    )));
}
```

## Config schema

In `src/config/schema.rs`:

```rust
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInConfig {
    pub enabled: bool,
}

impl Default for LinkedInConfig {
    fn default() -> Self {
        Self { enabled: false }
    }
}
```

Added as field `pub linkedin: LinkedInConfig` on the `Config` struct.
Added to the `pub use` exports in `src/config/mod.rs`.

## Testing

### Unit tests (in `linkedin.rs`)

- Tool name, description, schema validation
- Action dispatch routes correctly
- Write actions blocked in read-only mode
- Write actions blocked by rate limiting
- Missing required params return clear errors
- Unknown action returns an error
- `article_title` without `article_url` returns a validation error

### Unit tests (in `linkedin_client.rs`)

- Credential parsing from `.env` (plain, quoted, export prefix, comments)
- Missing credential fields produce specific errors
- Token refresh writes the updated token back to `.env`, preserving other keys
- Post creation builds the correct request body with URN formatting
- React builds the correct query param with the actor URN
- Visibility defaults to PUBLIC when omitted

### Registry tests (in `mod.rs`)

- `all_tools` excludes `linkedin` when `linkedin.enabled = false`
- `all_tools` includes `linkedin` when `linkedin.enabled = true`

### Integration tests

Not added in this PR — they would require live LinkedIn API credentials.
A `#[cfg(feature = "test-linkedin-live")]` gate can be added later.

## Error handling

- Missing `.env` file: "LinkedIn credentials not found. Add LINKEDIN_* keys to .env"
- Missing specific key: "LINKEDIN_ACCESS_TOKEN not found in .env"
- Expired token + no refresh token: "LinkedIn token expired. Re-authenticate or add LINKEDIN_REFRESH_TOKEN to .env"
- `article_title` without `article_url`: "article_title requires article_url to be set"
- API errors: pass through LinkedIn's error message with the status code
- Rate limited by LinkedIn: "LinkedIn API rate limit exceeded. Try again later."
- Missing scope: "LinkedIn API returned 403. Ensure your app has the required scopes: w_member_social, r_liteprofile, r_member_social"

## PR metadata

- **Branch:** `feat/linkedin-tool`
- **Title:** `feat(tools): add native LinkedIn integration tool`
- **Risk:** Medium — new tool, external API, no security boundary changes
- **Size target:** M (2 new files ~200-300 lines each, 3-4 modified files)
@@ -1499,7 +1499,68 @@ fn sanitize_channel_response(response: &str, tools: &[Box<dyn Tool>]) -> String
         .iter()
         .map(|tool| tool.name().to_ascii_lowercase())
         .collect();
-    strip_isolated_tool_json_artifacts(response, &known_tool_names)
+    // Strip XML-style tool-call tags (e.g. <tool_call>...</tool_call>)
+    let stripped_xml = strip_tool_call_tags(response);
+    // Strip isolated tool-call JSON artifacts
+    let stripped_json = strip_isolated_tool_json_artifacts(&stripped_xml, &known_tool_names);
+    // Strip leading narration lines that announce tool usage
+    strip_tool_narration(&stripped_json)
 }
+
+/// Remove leading lines that narrate tool usage (e.g. "Let me check the weather for you.").
+///
+/// Only strips lines from the very beginning of the message that match common
+/// narration patterns, so genuine content is preserved.
+fn strip_tool_narration(message: &str) -> String {
+    let narration_prefixes: &[&str] = &[
+        "let me ",
+        "i'll ",
+        "i will ",
+        "i am going to ",
+        "i'm going to ",
+        "searching ",
+        "looking up ",
+        "fetching ",
+        "checking ",
+        "using the ",
+        "using my ",
+        "one moment",
+        "hold on",
+        "just a moment",
+        "give me a moment",
+        "allow me to ",
+    ];
+
+    let mut result_lines: Vec<&str> = Vec::new();
+    let mut past_narration = false;
+
+    for line in message.lines() {
+        if past_narration {
+            result_lines.push(line);
+            continue;
+        }
+        let trimmed = line.trim();
+        if trimmed.is_empty() {
+            continue;
+        }
+        let lower = trimmed.to_lowercase();
+        if narration_prefixes.iter().any(|p| lower.starts_with(p)) {
+            // Skip this narration line
+            continue;
+        }
+        // First non-narration, non-empty line — keep everything from here
+        past_narration = true;
+        result_lines.push(line);
+    }
+
+    let joined = result_lines.join("\n");
+    let trimmed = joined.trim();
+    if trimmed.is_empty() && !message.trim().is_empty() {
+        // If stripping removed everything, return original to avoid empty reply
+        message.to_string()
+    } else {
+        trimmed.to_string()
+    }
+}
 
 fn is_tool_call_payload(value: &serde_json::Value, known_tool_names: &HashSet<String>) -> bool {
@@ -2697,6 +2758,17 @@ pub fn build_system_prompt_with_mode(
     use std::fmt::Write;
     let mut prompt = String::with_capacity(8192);
 
+    // ── 0. Anti-narration (top priority) ───────────────────────
+    prompt.push_str(
+        "## CRITICAL: No Tool Narration\n\n\
+         NEVER narrate, announce, describe, or explain your tool usage to the user. \
+         Do NOT say things like 'Let me check...', 'I will use http_request to...', \
+         'I'll fetch that for you', 'Searching now...', or 'Using the web_search tool'. \
+         The user must ONLY see the final answer. Tool calls are invisible infrastructure — \
+         never reference them. If you catch yourself starting a sentence about what tool \
+         you are about to use or just used, DELETE it and give the answer directly.\n\n",
+    );
+
     // ── 1. Tooling ──────────────────────────────────────────────
     if !tools.is_empty() {
         prompt.push_str("## Tools\n\n");
@@ -2836,7 +2908,9 @@ pub fn build_system_prompt_with_mode(
     prompt.push_str("- You are running as a messaging bot. Your response is automatically sent back to the user's channel.\n");
     prompt.push_str("- You do NOT need to ask permission to respond — just respond directly.\n");
     prompt.push_str("- NEVER repeat, describe, or echo credentials, tokens, API keys, or secrets in your responses.\n");
-    prompt.push_str("- If a tool output contains credentials, they have already been redacted — do not mention them.\n\n");
+    prompt.push_str("- If a tool output contains credentials, they have already been redacted — do not mention them.\n");
+    prompt.push_str("- When a user sends a voice note, it is automatically transcribed to text. Your text reply is automatically converted to a voice note and sent back. Do NOT attempt to generate audio yourself — TTS is handled by the channel.\n");
+    prompt.push_str("- NEVER narrate or describe your tool usage. Do NOT say 'Let me fetch...', 'I will use...', 'Searching...', or similar. Give the FINAL ANSWER only — no intermediate steps, no tool mentions, no progress updates.\n\n");
 
     if prompt.is_empty() {
         "You are ZeroClaw, a fast and efficient AI assistant built in Rust. Be helpful, concise, and direct."
@@ -3346,7 +3420,8 @@ fn collect_configured_channels(
                 wa.pair_code.clone(),
                 wa.allowed_numbers.clone(),
             )
-            .with_transcription(config.transcription.clone()),
+            .with_transcription(config.transcription.clone())
+            .with_tts(config.tts.clone()),
         ),
     });
 } else {
@@ -39,6 +39,47 @@ fn normalize_audio_filename(file_name: &str) -> String {
     }
 }
 
+/// Resolve the API key for voice transcription.
+///
+/// Priority order:
+/// 1. Explicit `config.api_key` (if set and non-empty).
+/// 2. Provider-specific env var based on `api_url`:
+///    - URL contains "openai.com" -> `OPENAI_API_KEY`
+///    - URL contains "groq.com" -> `GROQ_API_KEY`
+/// 3. Fallback chain: `TRANSCRIPTION_API_KEY` -> `GROQ_API_KEY` -> `OPENAI_API_KEY`.
+fn resolve_transcription_api_key(config: &TranscriptionConfig) -> Result<String> {
+    // 1. Explicit config key
+    if let Some(ref key) = config.api_key {
+        let trimmed = key.trim();
+        if !trimmed.is_empty() {
+            return Ok(trimmed.to_string());
+        }
+    }
+
+    // 2. Provider-specific env var based on API URL
+    if config.api_url.contains("openai.com") {
+        if let Ok(key) = std::env::var("OPENAI_API_KEY") {
+            return Ok(key);
+        }
+    } else if config.api_url.contains("groq.com") {
+        if let Ok(key) = std::env::var("GROQ_API_KEY") {
+            return Ok(key);
+        }
+    }
+
+    // 3. Fallback chain
+    for var in ["TRANSCRIPTION_API_KEY", "GROQ_API_KEY", "OPENAI_API_KEY"] {
+        if let Ok(key) = std::env::var(var) {
+            return Ok(key);
+        }
+    }
+
+    bail!(
+        "No API key found for voice transcription — set one of: \
+         transcription.api_key in config, TRANSCRIPTION_API_KEY, GROQ_API_KEY, or OPENAI_API_KEY"
+    );
+}
 
 /// Validate audio data and resolve MIME type from file name.
 ///
 /// Returns `(normalized_filename, mime_type)` on success.
@@ -710,8 +751,10 @@ mod tests {

    #[tokio::test]
    async fn rejects_missing_api_key() {
        // Ensure fallback env key is absent for this test.
        // Ensure all candidate keys are absent for this test.
        std::env::remove_var("GROQ_API_KEY");
        std::env::remove_var("OPENAI_API_KEY");
        std::env::remove_var("TRANSCRIPTION_API_KEY");

        let data = vec![0u8; 100];
        let config = TranscriptionConfig::default();
@@ -85,6 +85,7 @@ impl TtsProvider for OpenAiTtsProvider {
            "input": text,
            "voice": voice,
            "speed": self.speed,
            "response_format": "opus",
        });

        let resp = self
@@ -64,8 +64,17 @@ pub struct WhatsAppWebChannel {
    client: Arc<Mutex<Option<Arc<wa_rs::Client>>>>,
    /// Message sender channel
    tx: Arc<Mutex<Option<tokio::sync::mpsc::Sender<ChannelMessage>>>>,
    /// Voice transcription configuration
    /// Voice transcription (STT) config
    transcription: Option<crate::config::TranscriptionConfig>,
    /// Text-to-speech config for voice replies
    tts_config: Option<crate::config::TtsConfig>,
    /// Chats awaiting a voice reply — maps chat JID to the latest substantive
    /// reply text. A background task debounces and sends the voice note after
    /// the agent finishes its turn (no newer substantive send() within the debounce window).
    pending_voice:
        Arc<std::sync::Mutex<std::collections::HashMap<String, (String, std::time::Instant)>>>,
    /// Chats whose last incoming message was a voice note.
    voice_chats: Arc<std::sync::Mutex<std::collections::HashSet<String>>>,
}
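The `pending_voice` debounce described in the field docs can be sketched as follows. This is a minimal illustration of the idea, not the channel's code: `take_if_quiet` is a hypothetical stand-in for the logic the spawned task runs, and the quiet window is a parameter rather than the hardcoded value.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Each new substantive reply overwrites the chat's entry; a drain only
// removes entries that have been quiet for at least `quiet`, so only
// the final answer of a turn becomes a voice note.
fn take_if_quiet(
    pending: &mut HashMap<String, (String, Instant)>,
    chat: &str,
    quiet: Duration,
) -> Option<String> {
    let quiet_enough = pending
        .get(chat)
        .map(|(_, ts)| ts.elapsed() >= quiet)
        .unwrap_or(false);
    if quiet_enough {
        pending.remove(chat).map(|(text, _)| text)
    } else {
        None
    }
}

fn main() {
    let mut pending = HashMap::new();
    pending.insert(
        "123@g.us".to_string(),
        ("final answer".to_string(), Instant::now()),
    );
    // Freshly updated entries are not drained yet.
    assert_eq!(take_if_quiet(&mut pending, "123@g.us", Duration::from_secs(60)), None);
    // Once the quiet window has passed (zero here for the example), it drains.
    assert_eq!(
        take_if_quiet(&mut pending, "123@g.us", Duration::ZERO).as_deref(),
        Some("final answer")
    );
}
```

Removing the entry on drain is what makes the check-and-remove effectively atomic under the mutex: only one task gets the text.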

impl WhatsAppWebChannel {
@@ -93,10 +102,13 @@ impl WhatsAppWebChannel {
            client: Arc::new(Mutex::new(None)),
            tx: Arc::new(Mutex::new(None)),
            transcription: None,
            tts_config: None,
            pending_voice: Arc::new(std::sync::Mutex::new(std::collections::HashMap::new())),
            voice_chats: Arc::new(std::sync::Mutex::new(std::collections::HashSet::new())),
        }
    }

    /// Configure voice transcription.
    /// Configure voice transcription (STT) for incoming voice notes.
    #[cfg(feature = "whatsapp-web")]
    pub fn with_transcription(mut self, config: crate::config::TranscriptionConfig) -> Self {
        if config.enabled {
@@ -105,6 +117,15 @@ impl WhatsAppWebChannel {
        self
    }

    /// Configure text-to-speech for outgoing voice replies.
    #[cfg(feature = "whatsapp-web")]
    pub fn with_tts(mut self, config: crate::config::TtsConfig) -> Self {
        if config.enabled {
            self.tts_config = Some(config);
        }
        self
    }

    /// Check if a phone number is allowed (E.164 format: +1234567890)
    #[cfg(feature = "whatsapp-web")]
    fn is_number_allowed(&self, phone: &str) -> bool {
@@ -287,6 +308,134 @@ impl WhatsAppWebChannel {
            format!("{expanded_session_path}-shm"),
        ]
    }

    /// Attempt to download and transcribe a WhatsApp voice note.
    ///
    /// Returns `None` if transcription is disabled, download fails, or
    /// transcription fails (all logged as warnings).
    #[cfg(feature = "whatsapp-web")]
    async fn try_transcribe_voice_note(
        client: &wa_rs::Client,
        audio: &wa_rs_proto::whatsapp::message::AudioMessage,
        transcription_config: Option<&crate::config::TranscriptionConfig>,
    ) -> Option<String> {
        let config = transcription_config?;

        // Enforce duration limit
        if let Some(seconds) = audio.seconds {
            if u64::from(seconds) > config.max_duration_secs {
                tracing::info!(
                    "WhatsApp Web: skipping voice note ({}s exceeds {}s limit)",
                    seconds,
                    config.max_duration_secs
                );
                return None;
            }
        }

        // Download the encrypted audio
        use wa_rs::download::Downloadable;
        let audio_data = match client.download(audio as &dyn Downloadable).await {
            Ok(data) => data,
            Err(e) => {
                tracing::warn!("WhatsApp Web: failed to download voice note: {e}");
                return None;
            }
        };

        // Determine filename from mimetype for transcription API
        let file_name = match audio.mimetype.as_deref() {
            Some(m) if m.contains("opus") || m.contains("ogg") => "voice.ogg",
            Some(m) if m.contains("mp4") || m.contains("m4a") => "voice.m4a",
            Some(m) if m.contains("mpeg") || m.contains("mp3") => "voice.mp3",
            Some(m) if m.contains("webm") => "voice.webm",
            _ => "voice.ogg", // WhatsApp default
        };

        tracing::info!(
            "WhatsApp Web: transcribing voice note ({} bytes, file={})",
            audio_data.len(),
            file_name
        );

        match super::transcription::transcribe_audio(audio_data, file_name, config).await {
            Ok(text) if text.trim().is_empty() => {
                tracing::info!("WhatsApp Web: voice transcription returned empty text, skipping");
                None
            }
            Ok(text) => {
                tracing::info!(
                    "WhatsApp Web: voice note transcribed ({} chars)",
                    text.len()
                );
                Some(text)
            }
            Err(e) => {
                tracing::warn!("WhatsApp Web: voice transcription failed: {e}");
                None
            }
        }
    }

    /// Synthesize text to speech and send as a WhatsApp voice note (static version for spawned tasks).
    #[cfg(feature = "whatsapp-web")]
    async fn synthesize_voice_static(
        client: &wa_rs::Client,
        to: &wa_rs_binary::jid::Jid,
        text: &str,
        tts_config: &crate::config::TtsConfig,
    ) -> Result<()> {
        let tts_manager = super::tts::TtsManager::new(tts_config)?;
        let audio_bytes = tts_manager.synthesize(text).await?;
        let audio_len = audio_bytes.len();
        tracing::info!("WhatsApp Web TTS: synthesized {} bytes of audio", audio_len);

        if audio_bytes.is_empty() {
            anyhow::bail!("TTS returned empty audio");
        }

        use wa_rs_core::download::MediaType;
        let upload = client
            .upload(audio_bytes, MediaType::Audio)
            .await
            .map_err(|e| anyhow!("Failed to upload TTS audio: {e}"))?;

        tracing::info!(
            "WhatsApp Web TTS: uploaded audio (url_len={}, file_length={})",
            upload.url.len(),
            upload.file_length
        );

        // Estimate duration: Opus at ~32kbps → bytes / 4000 ≈ seconds
        #[allow(clippy::cast_possible_truncation)]
        let estimated_seconds = std::cmp::max(1, (upload.file_length / 4000) as u32);

        let voice_msg = wa_rs_proto::whatsapp::Message {
            audio_message: Some(Box::new(wa_rs_proto::whatsapp::message::AudioMessage {
                url: Some(upload.url),
                direct_path: Some(upload.direct_path),
                media_key: Some(upload.media_key),
                file_enc_sha256: Some(upload.file_enc_sha256),
                file_sha256: Some(upload.file_sha256),
                file_length: Some(upload.file_length),
                mimetype: Some("audio/ogg; codecs=opus".to_string()),
                ptt: Some(true),
                seconds: Some(estimated_seconds),
                ..Default::default()
            })),
            ..Default::default()
        };

        Box::pin(client.send_message(to.clone(), voice_msg))
            .await
            .map_err(|e| anyhow!("Failed to send voice note: {e}"))?;
        tracing::info!(
            "WhatsApp Web TTS: sent voice note ({} bytes, ~{}s)",
            audio_len,
            estimated_seconds
        );
        Ok(())
    }
}
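The duration estimate above follows from the bitrate arithmetic: Opus at ~32 kbps is 32 000 bits/s = 4 000 bytes/s, so dividing the byte length by 4000 approximates seconds, floored at 1 so a tiny upload never reports 0 s. A worked example (`estimate_seconds` is an illustrative helper, not the crate's API):

```rust
// bytes / 4000 ≈ seconds for ~32 kbps Opus; floor at 1 second.
fn estimate_seconds(file_length: u64) -> u32 {
    let secs = (file_length / 4000) as u32; // truncating cast is fine at realistic sizes
    std::cmp::max(1, secs)
}

fn main() {
    assert_eq!(estimate_seconds(120_000), 30); // a 120 kB voice note ≈ 30 s
    assert_eq!(estimate_seconds(500), 1); // tiny files floor at 1 s
    println!("120 kB ≈ {} s", estimate_seconds(120_000));
}
```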

#[cfg(feature = "whatsapp-web")]
@@ -315,6 +464,88 @@ impl Channel for WhatsAppWebChannel {
        }

        let to = self.recipient_to_jid(&message.recipient)?;

        // Voice chat mode: send text normally AND queue a voice note of the
        // final answer. Only substantive messages (not tool outputs) are queued.
        // A debounce task waits 10s after the last substantive message, then
        // sends ONE voice note. Text in → text out. Voice in → text + voice out.
        let is_voice_chat = self
            .voice_chats
            .lock()
            .map(|vs| vs.contains(&message.recipient))
            .unwrap_or(false);

        if is_voice_chat && self.tts_config.is_some() {
            let content = &message.content;
            // Only queue substantive natural-language replies for voice.
            // Skip tool outputs: URLs, JSON, code blocks, errors, short status.
            let is_substantive = content.len() > 40
                && !content.starts_with("http")
                && !content.starts_with('{')
                && !content.starts_with('[')
                && !content.starts_with("Error")
                && !content.contains("```")
                && !content.contains("tool_call")
                && !content.contains("wttr.in");

            if is_substantive {
                if let Ok(mut pv) = self.pending_voice.lock() {
                    pv.insert(
                        message.recipient.clone(),
                        (content.clone(), std::time::Instant::now()),
                    );
                }

                let pending = self.pending_voice.clone();
                let voice_chats = self.voice_chats.clone();
                let client_clone = client.clone();
                let to_clone = to.clone();
                let recipient = message.recipient.clone();
                let tts_config = self.tts_config.clone().unwrap();
                tokio::spawn(async move {
                    // Wait 10 seconds — long enough for the agent to finish its
                    // full tool chain and send the final answer.
                    tokio::time::sleep(tokio::time::Duration::from_secs(10)).await;

                    // Atomic check-and-remove: only one task gets the value
                    let to_voice = pending.lock().ok().and_then(|mut pv| {
                        if let Some((_, ts)) = pv.get(&recipient) {
                            if ts.elapsed().as_secs() >= 8 {
                                return pv.remove(&recipient).map(|(text, _)| text);
                            }
                        }
                        None
                    });

                    if let Some(text) = to_voice {
                        if let Ok(mut vc) = voice_chats.lock() {
                            vc.remove(&recipient);
                        }
                        match Box::pin(WhatsAppWebChannel::synthesize_voice_static(
                            &client_clone,
                            &to_clone,
                            &text,
                            &tts_config,
                        ))
                        .await
                        {
                            Ok(()) => {
                                tracing::info!(
                                    "WhatsApp Web: voice reply sent ({} chars)",
                                    text.len()
                                );
                            }
                            Err(e) => {
                                tracing::warn!("WhatsApp Web: TTS voice reply failed: {e}");
                            }
                        }
                    }
                });
            }
            // Fall through to send text normally (voice chat gets BOTH)
        }

        // Send text message
        let outgoing = wa_rs_proto::whatsapp::Message {
            conversation: Some(message.content.clone()),
            ..Default::default()
@@ -322,7 +553,7 @@ impl Channel for WhatsAppWebChannel {

        let message_id = client.send_message(to, outgoing).await?;
        tracing::debug!(
            "WhatsApp Web: sent message to {} (id: {})",
            "WhatsApp Web: sent text to {} (id: {})",
            message.recipient,
            message_id
        );
@@ -394,6 +625,9 @@ impl Channel for WhatsAppWebChannel {
        let session_revoked_clone = session_revoked.clone();
        let transcription_config = self.transcription.clone();

        let transcription_config = self.transcription.clone();
        let voice_chats = self.voice_chats.clone();

        let mut builder = Bot::builder()
            .with_backend(backend)
            .with_transport_factory(transport_factory)
@@ -405,27 +639,15 @@ impl Channel for WhatsAppWebChannel {
            let retry_count = retry_count_clone.clone();
            let session_revoked = session_revoked_clone.clone();
            let transcription_config = transcription_config.clone();
            let voice_chats = voice_chats.clone();
            async move {
                match event {
                    Event::Message(msg, info) => {
                        // Extract message content
                        let text = msg.text_content().unwrap_or("");
                        let sender_jid = info.source.sender.clone();
                        let sender_alt = info.source.sender_alt.clone();
                        let sender = sender_jid.user().to_string();
                        let chat = info.source.chat.to_string();

                        tracing::info!(
                            "WhatsApp Web message received (sender_len={}, chat_len={}, text_len={})",
                            sender.len(),
                            chat.len(),
                            text.len()
                        );
                        tracing::debug!(
                            "WhatsApp Web message content: {}",
                            text
                        );

                        let mapped_phone = if sender_jid.is_lid() {
                            client.get_phone_number_from_lid(&sender_jid.user).await
                        } else {
@@ -437,93 +659,92 @@ impl Channel for WhatsAppWebChannel {
                            mapped_phone.as_deref(),
                        );

                        if let Some(normalized) = sender_candidates
                        let normalized = match sender_candidates
                            .iter()
                            .find(|candidate| {
                                Self::is_number_allowed_for_list(&allowed_numbers, candidate)
                            })
                            .cloned()
                        {
                            let content = if !text.trim().is_empty() {
                                text.trim().to_string()
                            } else if let Some(ref audio) = msg.get_base_message().audio_message {
                                let duration = audio.seconds.unwrap_or(0);
                                tracing::info!(
                                    "WhatsApp Web audio from {} ({}s, ptt={})",
                                    normalized, duration, audio.ptt.unwrap_or(false)
                            Some(n) => n,
                            None => {
                                tracing::warn!(
                                    "WhatsApp Web: message from unrecognized sender not in allowed list (candidates_count={})",
                                    sender_candidates.len()
                                );

                                let config = match transcription_config.as_ref() {
                                    Some(c) => c,
                                    None => {
                                        tracing::debug!("WhatsApp Web: transcription disabled, ignoring audio");
                                        return;
                                    }
                                };

                                if u64::from(duration) > config.max_duration_secs {
                                    tracing::info!(
                                        "WhatsApp Web: skipping audio ({}s > {}s limit)",
                                        duration, config.max_duration_secs
                                    );
                                    return;
                                }

                                let audio_data = match client.download(audio.as_ref()).await {
                                    Ok(d) => d,
                                    Err(e) => {
                                        tracing::warn!("WhatsApp Web: failed to download audio: {e}");
                                        return;
                                    }
                                };

                                let file_name = match audio.mimetype.as_deref() {
                                    Some(m) if m.contains("ogg") => "voice.ogg",
                                    Some(m) if m.contains("opus") => "voice.opus",
                                    Some(m) if m.contains("mp4") || m.contains("m4a") => "voice.m4a",
                                    Some(m) if m.contains("webm") => "voice.webm",
                                    _ => "voice.ogg",
                                };

                                match super::transcription::transcribe_audio(audio_data, file_name, config).await {
                                    Ok(t) if !t.trim().is_empty() => {
                                        tracing::info!("WhatsApp Web: transcribed audio from {}: {}", normalized, t.trim());
                                        t.trim().to_string()
                                    }
                                    Ok(_) => {
                                        tracing::info!("WhatsApp Web: transcription returned empty text");
                                        return;
                                    }
                                    Err(e) => {
                                        tracing::warn!("WhatsApp Web: transcription failed: {e}");
                                        return;
                                    }
                                }
                            } else {
                                tracing::debug!("WhatsApp Web: ignoring non-text/non-audio message from {}", normalized);
                                return;
                            };
                            }
                        };

                        if let Err(e) = tx_inner
                            .send(ChannelMessage {
                                id: uuid::Uuid::new_v4().to_string(),
                                channel: "whatsapp".to_string(),
                                sender: normalized.clone(),
                                // Reply to the originating chat JID (DM or group).
                                reply_target: chat,
                                content,
                                timestamp: chrono::Utc::now().timestamp() as u64,
                                thread_ts: None,
                            })
                        // Attempt voice note transcription (ptt = push-to-talk = voice note)
                        let voice_text = if let Some(ref audio) = msg.audio_message {
                            if audio.ptt == Some(true) {
                                Self::try_transcribe_voice_note(
                                    &client,
                                    audio,
                                    transcription_config.as_ref(),
                                )
                                .await
                        {
                            tracing::error!("Failed to send message to channel: {}", e);
                            } else {
                                tracing::debug!(
                                    "WhatsApp Web: ignoring non-PTT audio message from {}",
                                    normalized
                                );
                                None
                            }
                        } else {
                            tracing::warn!(
                                "WhatsApp Web: message from unrecognized sender not in allowed list (candidates_count={})",
                                sender_candidates.len()
                            None
                        };

                        // Use transcribed voice text, or fall back to text content.
                        // Track whether this chat used a voice note so we reply in kind.
                        // We store the chat JID (reply_target) since that's what send() receives.
                        let content = if let Some(ref vt) = voice_text {
                            if let Ok(mut vs) = voice_chats.lock() {
                                vs.insert(chat.clone());
                            }
                            format!("[Voice] {vt}")
                        } else {
                            if let Ok(mut vs) = voice_chats.lock() {
                                vs.remove(&chat);
                            }
                            let text = msg.text_content().unwrap_or("");
                            text.trim().to_string()
                        };

                        tracing::info!(
                            "WhatsApp Web message received (sender_len={}, chat_len={}, content_len={})",
                            sender.len(),
                            chat.len(),
                            content.len()
                        );
                        tracing::debug!(
                            "WhatsApp Web message content: {}",
                            content
                        );

                        if content.is_empty() {
                            tracing::debug!(
                                "WhatsApp Web: ignoring empty or non-text message from {}",
                                normalized
                            );
                            return;
                        }

                        if let Err(e) = tx_inner
                            .send(ChannelMessage {
                                id: uuid::Uuid::new_v4().to_string(),
                                channel: "whatsapp".to_string(),
                                sender: normalized.clone(),
                                // Reply to the originating chat JID (DM or group).
                                reply_target: chat,
                                content,
                                timestamp: chrono::Utc::now().timestamp() as u64,
                                thread_ts: None,
                            })
                            .await
                        {
                            tracing::error!("Failed to send message to channel: {}", e);
                        }
                    }
                    Event::Connected(_) => {
@@ -764,6 +985,10 @@ impl WhatsAppWebChannel {
    pub fn with_transcription(self, _config: crate::config::TranscriptionConfig) -> Self {
        self
    }

    pub fn with_tts(self, _config: crate::config::TtsConfig) -> Self {
        self
    }
}

#[cfg(not(feature = "whatsapp-web"))]
@@ -13,7 +13,9 @@ pub use schema::{
    DockerRuntimeConfig, EdgeTtsConfig, ElevenLabsTtsConfig, EmbeddingRouteConfig, EstopConfig,
    FeishuConfig, GatewayConfig, GoogleSttConfig, GoogleTtsConfig, GoogleWorkspaceConfig,
    HardwareConfig, HardwareTransport, HeartbeatConfig, HooksConfig, HttpRequestConfig,
    IMessageConfig, IdentityConfig, KnowledgeConfig, LarkConfig, MatrixConfig, McpConfig,
    IMessageConfig, IdentityConfig, ImageProviderDalleConfig, ImageProviderFluxConfig,
    ImageProviderImagenConfig, ImageProviderStabilityConfig, KnowledgeConfig, LarkConfig,
    LinkedInConfig, LinkedInContentConfig, LinkedInImageConfig, MatrixConfig, McpConfig,
    McpServerConfig, McpTransport, MemoryConfig, Microsoft365Config, ModelRouteConfig,
    MultimodalConfig, NextcloudTalkConfig, NodeTransportConfig, NodesConfig, NotionConfig,
    ObservabilityConfig, OpenAiSttConfig, OpenAiTtsConfig, OpenVpnTunnelConfig, OtpConfig,

@@ -331,6 +331,10 @@ pub struct Config {
    /// Knowledge graph configuration (`[knowledge]`).
    #[serde(default)]
    pub knowledge: KnowledgeConfig,

    /// LinkedIn integration configuration (`[linkedin]`).
    #[serde(default)]
    pub linkedin: LinkedInConfig,
}

/// Multi-client workspace isolation configuration.
@@ -2223,6 +2227,272 @@ impl Default for KnowledgeConfig {
    }
}

// ── LinkedIn ────────────────────────────────────────────────────

/// LinkedIn integration configuration (`[linkedin]` section).
///
/// When enabled, the `linkedin` tool is registered in the agent tool surface.
/// Requires `LINKEDIN_*` credentials in the workspace `.env` file.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInConfig {
    /// Enable the LinkedIn tool.
    #[serde(default)]
    pub enabled: bool,

    /// LinkedIn REST API version header (YYYYMM format).
    #[serde(default = "default_linkedin_api_version")]
    pub api_version: String,

    /// Content strategy for automated posting.
    #[serde(default)]
    pub content: LinkedInContentConfig,

    /// Image generation for posts (`[linkedin.image]`).
    #[serde(default)]
    pub image: LinkedInImageConfig,
}

impl Default for LinkedInConfig {
    fn default() -> Self {
        Self {
            enabled: false,
            api_version: default_linkedin_api_version(),
            content: LinkedInContentConfig::default(),
            image: LinkedInImageConfig::default(),
        }
    }
}

fn default_linkedin_api_version() -> String {
    "202602".to_string()
}

/// Content strategy configuration for LinkedIn auto-posting (`[linkedin.content]`).
///
/// The agent reads this via the `linkedin get_content_strategy` action to know
/// what feeds to check, which repos to highlight, and how to write posts.
#[derive(Debug, Clone, Default, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInContentConfig {
    /// RSS feed URLs to monitor for topic inspiration (titles only).
    #[serde(default)]
    pub rss_feeds: Vec<String>,

    /// GitHub usernames whose public activity to reference.
    #[serde(default)]
    pub github_users: Vec<String>,

    /// GitHub repositories to highlight (format: `owner/repo`).
    #[serde(default)]
    pub github_repos: Vec<String>,

    /// Topics of expertise and interest for post themes.
    #[serde(default)]
    pub topics: Vec<String>,

    /// Professional persona description (name, role, expertise).
    #[serde(default)]
    pub persona: String,

    /// Freeform posting instructions for the AI agent.
    #[serde(default)]
    pub instructions: String,
}
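Taken together, the fields above map onto a config.toml fragment along these lines. The values are illustrative placeholders (example.com URLs, invented names), not defaults shipped by the tool:

```toml
[linkedin]
enabled = true
api_version = "202602"

[linkedin.content]
rss_feeds = ["https://example.com/feed.xml"]
github_users = ["example-user"]
github_repos = ["example-org/example-repo"]
topics = ["Rust", "agent tooling"]
persona = "Example: staff engineer writing about infrastructure"
instructions = "Keep posts concise and conversational; avoid hashtag spam."
```

Every field is `#[serde(default)]`, so any of these keys may be omitted.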

/// Image generation configuration for LinkedIn posts (`[linkedin.image]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct LinkedInImageConfig {
    /// Enable image generation for posts.
    #[serde(default)]
    pub enabled: bool,

    /// Provider priority order. Tried in sequence; first success wins.
    #[serde(default = "default_image_providers")]
    pub providers: Vec<String>,

    /// Generate a branded SVG text card when all AI providers fail.
    #[serde(default = "default_true")]
    pub fallback_card: bool,

    /// Accent color for the fallback card (CSS hex).
    #[serde(default = "default_card_accent_color")]
    pub card_accent_color: String,

    /// Temp directory for generated images, relative to workspace.
    #[serde(default = "default_image_temp_dir")]
    pub temp_dir: String,

    /// Stability AI provider settings.
    #[serde(default)]
    pub stability: ImageProviderStabilityConfig,

    /// Google Imagen (Vertex AI) provider settings.
    #[serde(default)]
    pub imagen: ImageProviderImagenConfig,

    /// OpenAI DALL-E provider settings.
    #[serde(default)]
    pub dalle: ImageProviderDalleConfig,

    /// Flux (fal.ai) provider settings.
    #[serde(default)]
    pub flux: ImageProviderFluxConfig,
}

fn default_image_providers() -> Vec<String> {
    vec![
        "stability".into(),
        "imagen".into(),
        "dalle".into(),
        "flux".into(),
    ]
}

fn default_card_accent_color() -> String {
    "#0A66C2".into()
}

fn default_image_temp_dir() -> String {
    "linkedin/images".into()
}

impl Default for LinkedInImageConfig {
    fn default() -> Self {
        Self {
            enabled: false,
            providers: default_image_providers(),
            fallback_card: true,
            card_accent_color: default_card_accent_color(),
            temp_dir: default_image_temp_dir(),
            stability: ImageProviderStabilityConfig::default(),
            imagen: ImageProviderImagenConfig::default(),
            dalle: ImageProviderDalleConfig::default(),
            flux: ImageProviderFluxConfig::default(),
        }
    }
}

/// Stability AI image generation settings (`[linkedin.image.stability]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageProviderStabilityConfig {
    /// Environment variable name holding the API key.
    #[serde(default = "default_stability_api_key_env")]
    pub api_key_env: String,
    /// Stability model identifier.
    #[serde(default = "default_stability_model")]
    pub model: String,
}

fn default_stability_api_key_env() -> String {
    "STABILITY_API_KEY".into()
}
fn default_stability_model() -> String {
    "stable-diffusion-xl-1024-v1-0".into()
}

impl Default for ImageProviderStabilityConfig {
    fn default() -> Self {
        Self {
            api_key_env: default_stability_api_key_env(),
            model: default_stability_model(),
        }
    }
}

/// Google Imagen (Vertex AI) settings (`[linkedin.image.imagen]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageProviderImagenConfig {
    /// Environment variable name holding the API key.
    #[serde(default = "default_imagen_api_key_env")]
    pub api_key_env: String,
    /// Environment variable for the Google Cloud project ID.
    #[serde(default = "default_imagen_project_id_env")]
    pub project_id_env: String,
    /// Vertex AI region.
    #[serde(default = "default_imagen_region")]
    pub region: String,
}

fn default_imagen_api_key_env() -> String {
    "GOOGLE_VERTEX_API_KEY".into()
}
fn default_imagen_project_id_env() -> String {
    "GOOGLE_CLOUD_PROJECT".into()
}
fn default_imagen_region() -> String {
    "us-central1".into()
}

impl Default for ImageProviderImagenConfig {
    fn default() -> Self {
        Self {
            api_key_env: default_imagen_api_key_env(),
            project_id_env: default_imagen_project_id_env(),
            region: default_imagen_region(),
        }
    }
}

/// OpenAI DALL-E settings (`[linkedin.image.dalle]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageProviderDalleConfig {
    /// Environment variable name holding the OpenAI API key.
    #[serde(default = "default_dalle_api_key_env")]
    pub api_key_env: String,
    /// DALL-E model identifier.
    #[serde(default = "default_dalle_model")]
    pub model: String,
    /// Image dimensions.
    #[serde(default = "default_dalle_size")]
    pub size: String,
}

fn default_dalle_api_key_env() -> String {
    "OPENAI_API_KEY".into()
}
fn default_dalle_model() -> String {
    "dall-e-3".into()
}
fn default_dalle_size() -> String {
    "1024x1024".into()
}

impl Default for ImageProviderDalleConfig {
    fn default() -> Self {
        Self {
            api_key_env: default_dalle_api_key_env(),
            model: default_dalle_model(),
            size: default_dalle_size(),
        }
    }
}

/// Flux (fal.ai) image generation settings (`[linkedin.image.flux]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct ImageProviderFluxConfig {
    /// Environment variable name holding the fal.ai API key.
    #[serde(default = "default_flux_api_key_env")]
    pub api_key_env: String,
    /// Flux model identifier.
    #[serde(default = "default_flux_model")]
    pub model: String,
}

fn default_flux_api_key_env() -> String {
    "FAL_API_KEY".into()
}
fn default_flux_model() -> String {
    "fal-ai/flux/schnell".into()
}

impl Default for ImageProviderFluxConfig {
    fn default() -> Self {
        Self {
            api_key_env: default_flux_api_key_env(),
            model: default_flux_model(),
        }
    }
}

// ── Proxy ───────────────────────────────────────────────────────

/// Proxy application scope — determines which outbound traffic uses the proxy.
@@ -5584,6 +5854,7 @@ impl Default for Config {
            notion: NotionConfig::default(),
            node_transport: NodeTransportConfig::default(),
            knowledge: KnowledgeConfig::default(),
            linkedin: LinkedInConfig::default(),
        }
    }
}
@@ -8007,6 +8278,7 @@ default_temperature = 0.7
            notion: NotionConfig::default(),
            node_transport: NodeTransportConfig::default(),
            knowledge: KnowledgeConfig::default(),
            linkedin: LinkedInConfig::default(),
        };

        let toml_str = toml::to_string_pretty(&config).unwrap();
@@ -8312,6 +8584,7 @@ tool_dispatcher = "xml"
            notion: NotionConfig::default(),
            node_transport: NodeTransportConfig::default(),
            knowledge: KnowledgeConfig::default(),
            linkedin: LinkedInConfig::default(),
        };

        config.save().await.unwrap();

@@ -191,6 +191,7 @@ pub async fn run_wizard(force: bool) -> Result<Config> {
        notion: crate::config::NotionConfig::default(),
        node_transport: crate::config::NodeTransportConfig::default(),
        knowledge: crate::config::KnowledgeConfig::default(),
        linkedin: crate::config::LinkedInConfig::default(),
    };

    println!(
@@ -563,6 +564,7 @@ async fn run_quick_setup_with_home(
        notion: crate::config::NotionConfig::default(),
        node_transport: crate::config::NodeTransportConfig::default(),
        knowledge: crate::config::KnowledgeConfig::default(),
        linkedin: crate::config::LinkedInConfig::default(),
    };

    config.save().await?;
@@ -818,6 +818,26 @@ fn resolve_provider_credential(name: &str, credential_override: Option<&str>) ->
        if let Some(credential) = resolve_minimax_oauth_refresh_token(name) {
            return Some(credential);
        }
    } else if name == "anthropic" || name == "openai" || name == "groq" {
        // For well-known providers, prefer provider-specific env vars over the
        // global api_key override, since the global key may belong to a different
        // provider (e.g. a custom: gateway). This enables multi-provider setups
        // where the primary uses a custom gateway and fallbacks use named providers.
        let env_candidates: &[&str] = match name {
            "anthropic" => &["ANTHROPIC_OAUTH_TOKEN", "ANTHROPIC_API_KEY"],
            "openai" => &["OPENAI_API_KEY"],
            "groq" => &["GROQ_API_KEY"],
            _ => &[],
        };
        for env_var in env_candidates {
            if let Ok(val) = std::env::var(env_var) {
                let trimmed = val.trim().to_string();
                if !trimmed.is_empty() {
                    return Some(trimmed);
                }
            }
        }
        return Some(trimmed_override.to_owned());
    } else {
        return Some(trimmed_override.to_owned());
    }
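The lookup order in the hunk above (this is the Anthropic OAuth fix from the PR title) can be shown in a deterministic sketch. A `HashMap` stands in for the process environment, and `resolve` is an illustrative stand-in for `resolve_provider_credential`, not the crate's function:

```rust
use std::collections::HashMap;

// A well-known provider tries its own env vars first (ANTHROPIC_OAUTH_TOKEN
// before ANTHROPIC_API_KEY) and only then falls back to the global override.
fn resolve(env: &HashMap<&str, &str>, name: &str, global_override: &str) -> String {
    let candidates: &[&str] = match name {
        "anthropic" => &["ANTHROPIC_OAUTH_TOKEN", "ANTHROPIC_API_KEY"],
        "openai" => &["OPENAI_API_KEY"],
        "groq" => &["GROQ_API_KEY"],
        _ => &[],
    };
    for var in candidates {
        if let Some(val) = env.get(var) {
            let trimmed = val.trim();
            if !trimmed.is_empty() {
                return trimmed.to_string();
            }
        }
    }
    global_override.to_string()
}

fn main() {
    let mut env = HashMap::new();
    // No provider-specific var set: the custom-gateway override wins.
    assert_eq!(resolve(&env, "groq", "gateway-key"), "gateway-key");
    // A provider-specific var present: it takes priority over the override.
    env.insert("ANTHROPIC_OAUTH_TOKEN", "oauth-token");
    assert_eq!(resolve(&env, "anthropic", "gateway-key"), "oauth-token");
}
```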
src/tools/linkedin.rs (new file, 804 lines)
@@ -0,0 +1,804 @@
|
||||
use super::linkedin_client::{ImageGenerator, LinkedInClient};
use super::traits::{Tool, ToolResult};
use crate::config::{LinkedInContentConfig, LinkedInImageConfig};
use crate::security::SecurityPolicy;
use async_trait::async_trait;
use serde_json::json;
use std::path::PathBuf;
use std::sync::Arc;

pub struct LinkedInTool {
    security: Arc<SecurityPolicy>,
    workspace_dir: PathBuf,
    api_version: String,
    content_config: LinkedInContentConfig,
    image_config: LinkedInImageConfig,
}

impl LinkedInTool {
    pub fn new(
        security: Arc<SecurityPolicy>,
        workspace_dir: PathBuf,
        api_version: String,
        content_config: LinkedInContentConfig,
        image_config: LinkedInImageConfig,
    ) -> Self {
        Self {
            security,
            workspace_dir,
            api_version,
            content_config,
            image_config,
        }
    }

    fn is_write_action(action: &str) -> bool {
        matches!(action, "create_post" | "comment" | "react" | "delete_post")
    }

    fn build_content_strategy_summary(&self) -> String {
        let c = &self.content_config;
        let mut parts = Vec::new();

        if !c.persona.is_empty() {
            parts.push(format!("## Persona\n{}", c.persona));
        }

        if !c.topics.is_empty() {
            parts.push(format!("## Topics\n{}", c.topics.join(", ")));
        }

        if !c.rss_feeds.is_empty() {
            let feeds: Vec<String> = c.rss_feeds.iter().map(|f| format!("- {f}")).collect();
            parts.push(format!(
                "## RSS Feeds (fetch titles only for inspiration)\n{}",
                feeds.join("\n")
            ));
        }

        if !c.github_users.is_empty() {
            parts.push(format!(
                "## GitHub Users (check public activity)\n{}",
                c.github_users.join(", ")
            ));
        }

        if !c.github_repos.is_empty() {
            let repos: Vec<String> = c.github_repos.iter().map(|r| format!("- {r}")).collect();
            parts.push(format!(
                "## GitHub Repos (highlight project work)\n{}",
                repos.join("\n")
            ));
        }

        if !c.instructions.is_empty() {
            parts.push(format!("## Posting Instructions\n{}", c.instructions));
        }

        if parts.is_empty() {
            return "No content strategy configured. Add [linkedin.content] settings to config.toml with rss_feeds, github_repos, persona, topics, and instructions.".to_string();
        }

        parts.join("\n\n")
    }
}

#[async_trait]
impl Tool for LinkedInTool {
    fn name(&self) -> &str {
        "linkedin"
    }

    fn description(&self) -> &str {
        "Manage LinkedIn: create posts, list your posts, comment, react, delete posts, view engagement, get profile info, and read the configured content strategy. Requires LINKEDIN_* credentials in .env file."
    }

    fn parameters_schema(&self) -> serde_json::Value {
        json!({
            "type": "object",
            "properties": {
                "action": {
                    "type": "string",
                    "enum": [
                        "create_post",
                        "list_posts",
                        "comment",
                        "react",
                        "delete_post",
                        "get_engagement",
                        "get_profile",
                        "get_content_strategy"
                    ],
                    "description": "The LinkedIn action to perform"
                },
                "text": {
                    "type": "string",
                    "description": "Post or comment text content"
                },
                "visibility": {
                    "type": "string",
                    "enum": ["PUBLIC", "CONNECTIONS"],
                    "description": "Post visibility (default: PUBLIC)"
                },
                "article_url": {
                    "type": "string",
                    "description": "URL for link preview in a post"
                },
                "article_title": {
                    "type": "string",
                    "description": "Title for the article (requires article_url)"
                },
                "post_id": {
                    "type": "string",
                    "description": "LinkedIn post URN identifier"
                },
                "reaction_type": {
                    "type": "string",
                    "enum": ["LIKE", "CELEBRATE", "SUPPORT", "LOVE", "INSIGHTFUL", "FUNNY"],
                    "description": "Type of reaction to add to a post"
                },
                "count": {
                    "type": "integer",
                    "description": "Number of posts to retrieve (default 10, max 50)"
                },
                "generate_image": {
                    "type": "boolean",
                    "description": "Generate an AI image for the post (requires [linkedin.image] config). Falls back to branded SVG card if all providers fail."
                },
                "image_prompt": {
                    "type": "string",
                    "description": "Custom prompt for image generation. If omitted, a prompt is derived from the post text."
                },
                "scheduled_at": {
                    "type": "string",
                    "description": "Schedule the post for future publication. ISO 8601 / RFC 3339 timestamp, e.g. '2026-03-17T08:00:00Z'. The post is saved as a draft with scheduledPublishTime on LinkedIn."
                }
            },
            "required": ["action"]
        })
    }
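Given the schema above, a typical agent invocation of the tool would pass arguments like the following (all values illustrative):

```json
{
  "action": "create_post",
  "text": "Shipped a new release of our Rust agent runtime today.",
  "visibility": "PUBLIC",
  "generate_image": true,
  "scheduled_at": "2026-03-17T08:00:00Z"
}
```

Only `action` is required; `execute` below validates the rest per action.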

    async fn execute(&self, args: serde_json::Value) -> anyhow::Result<ToolResult> {
        let action = args
            .get("action")
            .and_then(|v| v.as_str())
            .ok_or_else(|| anyhow::anyhow!("Missing required 'action' parameter"))?;

        // Write actions require autonomy check
        if Self::is_write_action(action) && !self.security.can_act() {
            return Ok(ToolResult {
                success: false,
                output: String::new(),
                error: Some("Action blocked: autonomy is read-only".into()),
            });
        }

        // All actions are rate-limited
        if !self.security.record_action() {
            return Ok(ToolResult {
                success: false,
                output: String::new(),
                error: Some("Action blocked: rate limit exceeded".into()),
            });
        }

        let client = LinkedInClient::new(self.workspace_dir.clone(), self.api_version.clone());

        match action {
            "get_content_strategy" => {
                let strategy = self.build_content_strategy_summary();
                return Ok(ToolResult {
                    success: true,
                    output: strategy,
                    error: None,
                });
            }
            "create_post" => {
                let text = match args.get("text").and_then(|v| v.as_str()).map(str::trim) {
                    Some(t) if !t.is_empty() => t.to_string(),
                    _ => {
                        return Ok(ToolResult {
                            success: false,
                            output: String::new(),
                            error: Some("Missing required 'text' parameter for create_post".into()),
                        });
                    }
                };

                let visibility = args
                    .get("visibility")
                    .and_then(|v| v.as_str())
                    .unwrap_or("PUBLIC");

                let generate_image = args
                    .get("generate_image")
                    .and_then(|v| v.as_bool())
                    .unwrap_or(false);

                let article_url = args.get("article_url").and_then(|v| v.as_str());
                let article_title = args.get("article_title").and_then(|v| v.as_str());
                let scheduled_at = args.get("scheduled_at").and_then(|v| v.as_str());

                if article_title.is_some() && article_url.is_none() {
                    return Ok(ToolResult {
                        success: false,
                        output: String::new(),
                        error: Some("'article_title' requires 'article_url' to be provided".into()),
                    });
                }

                // Image generation flow
                if generate_image && self.image_config.enabled {
                    let image_prompt =
                        args.get("image_prompt")
                            .and_then(|v| v.as_str())
                            .map(String::from)
                            .unwrap_or_else(|| {
                                format!(
                                    "Professional, modern illustration for a LinkedIn post about: {}",
                                    if text.len() > 200 { &text[..200] } else { &text }
                                )
                            });

                    let generator =
                        ImageGenerator::new(self.image_config.clone(), self.workspace_dir.clone());

                    match generator.generate(&image_prompt).await {
                        Ok(image_path) => {
                            let image_bytes = tokio::fs::read(&image_path).await?;
                            let creds = client.get_credentials().await?;
                            let image_urn = client
                                .upload_image(&image_bytes, &creds.access_token, &creds.person_id)
                                .await?;

                            let post_id = client
                                .create_post_with_image(&text, visibility, &image_urn, scheduled_at)
                                .await?;

                            // Clean up temp file
                            let _ = ImageGenerator::cleanup(&image_path).await;

                            let action_word = if scheduled_at.is_some() {
                                "scheduled"
                            } else {
                                "published"
                            };
                            return Ok(ToolResult {
                                success: true,
                                output: format!(
                                    "Post {action_word} with image. Post ID: {post_id}, Image: {image_urn}"
                                ),
                                error: None,
                            });
                        }
                        Err(e) => {
                            // Image generation failed entirely — post without image
                            tracing::warn!("Image generation failed, posting without image: {e}");
                        }
                    }
                }

                let post_id = client
                    .create_post(&text, visibility, article_url, article_title, scheduled_at)
                    .await?;

                let action_word = if scheduled_at.is_some() {
                    "scheduled"
                } else {
                    "published"
                };
                Ok(ToolResult {
                    success: true,
                    output: format!("Post {action_word} successfully. Post ID: {post_id}"),
                    error: None,
                })
            }

            "list_posts" => {
                let count = args
                    .get("count")
                    .and_then(|v| v.as_u64())
                    .unwrap_or(10)
                    .clamp(1, 50) as usize;

                let posts = client.list_posts(count).await?;

                Ok(ToolResult {
                    success: true,
                    output: serde_json::to_string(&posts)?,
                    error: None,
                })
            }

            "comment" => {
                let post_id = match args.get("post_id").and_then(|v| v.as_str()) {
                    Some(id) if !id.is_empty() => id,
                    _ => {
                        return Ok(ToolResult {
                            success: false,
                            output: String::new(),
                            error: Some("Missing required 'post_id' parameter for comment".into()),
                        });
                    }
                };

                let text = match args.get("text").and_then(|v| v.as_str()).map(str::trim) {
                    Some(t) if !t.is_empty() => t.to_string(),
                    _ => {
                        return Ok(ToolResult {
                            success: false,
                            output: String::new(),
                            error: Some("Missing required 'text' parameter for comment".into()),
                        });
                    }
                };

                let comment_id = client.add_comment(post_id, &text).await?;

                Ok(ToolResult {
                    success: true,
                    output: format!("Comment posted successfully. Comment ID: {comment_id}"),
                    error: None,
                })
            }

            "react" => {
                let post_id = match args.get("post_id").and_then(|v| v.as_str()) {
                    Some(id) if !id.is_empty() => id,
                    _ => {
                        return Ok(ToolResult {
                            success: false,
                            output: String::new(),
                            error: Some("Missing required 'post_id' parameter for react".into()),
                        });
                    }
                };

                let reaction_type = match args.get("reaction_type").and_then(|v| v.as_str()) {
                    Some(rt) if !rt.is_empty() => rt,
                    _ => {
                        return Ok(ToolResult {
                            success: false,
                            output: String::new(),
                            error: Some(
                                "Missing required 'reaction_type' parameter for react".into(),
                            ),
                        });
                    }
                };

                client.add_reaction(post_id, reaction_type).await?;

                Ok(ToolResult {
                    success: true,
                    output: format!("Reaction '{reaction_type}' added to post {post_id}"),
                    error: None,
                })
            }

            "delete_post" => {
                let post_id = match args.get("post_id").and_then(|v| v.as_str()) {
                    Some(id) if !id.is_empty() => id,
                    _ => {
                        return Ok(ToolResult {
                            success: false,
                            output: String::new(),
                            error: Some(
                                "Missing required 'post_id' parameter for delete_post".into(),
                            ),
                        });
                    }
                };

                client.delete_post(post_id).await?;

                Ok(ToolResult {
                    success: true,
                    output: format!("Post {post_id} deleted successfully"),
                    error: None,
                })
            }

            "get_engagement" => {
                let post_id = match args.get("post_id").and_then(|v| v.as_str()) {
                    Some(id) if !id.is_empty() => id,
                    _ => {
                        return Ok(ToolResult {
                            success: false,
                            output: String::new(),
                            error: Some(
                                "Missing required 'post_id' parameter for get_engagement".into(),
                            ),
                        });
                    }
                };

                let engagement = client.get_engagement(post_id).await?;

                Ok(ToolResult {
                    success: true,
                    output: serde_json::to_string(&engagement)?,
                    error: None,
                })
            }

            "get_profile" => {
                let profile = client.get_profile().await?;

                Ok(ToolResult {
                    success: true,
                    output: serde_json::to_string(&profile)?,
                    error: None,
                })
            }

            unknown => Ok(ToolResult {
                success: false,
                output: String::new(),
                error: Some(format!("Unknown action: '{unknown}'")),
            }),
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::security::AutonomyLevel;

    fn test_security(level: AutonomyLevel, max_actions_per_hour: u32) -> Arc<SecurityPolicy> {
        Arc::new(SecurityPolicy {
            autonomy: level,
            max_actions_per_hour,
            workspace_dir: std::env::temp_dir(),
            ..SecurityPolicy::default()
        })
    }

    fn make_tool(level: AutonomyLevel, max_actions: u32) -> LinkedInTool {
        LinkedInTool::new(
            test_security(level, max_actions),
            PathBuf::from("/tmp"),
            "202602".to_string(),
            LinkedInContentConfig::default(),
            LinkedInImageConfig::default(),
        )
    }

    #[test]
    fn tool_name() {
        let tool = make_tool(AutonomyLevel::Full, 100);
        assert_eq!(tool.name(), "linkedin");
    }

    #[test]
    fn tool_description() {
        let tool = make_tool(AutonomyLevel::Full, 100);
        assert!(!tool.description().is_empty());
        assert!(tool.description().contains("LinkedIn"));
    }

    #[test]
    fn parameters_schema_has_required_action() {
        let tool = make_tool(AutonomyLevel::Full, 100);
        let schema = tool.parameters_schema();
        assert_eq!(schema["type"], "object");
        let required = schema["required"].as_array().unwrap();
        assert!(required.contains(&json!("action")));
    }

    #[test]
    fn parameters_schema_has_all_properties() {
        let tool = make_tool(AutonomyLevel::Full, 100);
        let schema = tool.parameters_schema();
        let props = &schema["properties"];
        assert!(props.get("action").is_some());
        assert!(props.get("text").is_some());
        assert!(props.get("visibility").is_some());
        assert!(props.get("article_url").is_some());
        assert!(props.get("article_title").is_some());
        assert!(props.get("post_id").is_some());
        assert!(props.get("reaction_type").is_some());
        assert!(props.get("count").is_some());
        assert!(props.get("generate_image").is_some());
        assert!(props.get("image_prompt").is_some());
    }

    #[tokio::test]
    async fn write_actions_blocked_in_readonly_mode() {
        let tool = make_tool(AutonomyLevel::ReadOnly, 100);

        for action in &["create_post", "comment", "react", "delete_post"] {
            let result = tool
                .execute(json!({
                    "action": action,
                    "text": "hello",
                    "post_id": "urn:li:share:123",
                    "reaction_type": "LIKE"
                }))
                .await
                .unwrap();
            assert!(
                !result.success,
                "Action '{action}' should be blocked in read-only mode"
            );
            assert!(
                result.error.as_ref().unwrap().contains("read-only"),
                "Action '{action}' error should mention read-only"
            );
        }
    }

    #[tokio::test]
    async fn write_actions_blocked_by_rate_limit() {
        let tool = make_tool(AutonomyLevel::Full, 0);

        for action in &["create_post", "comment", "react", "delete_post"] {
            let result = tool
                .execute(json!({
                    "action": action,
                    "text": "hello",
                    "post_id": "urn:li:share:123",
                    "reaction_type": "LIKE"
                }))
                .await
                .unwrap();
            assert!(
                !result.success,
                "Action '{action}' should be blocked by rate limit"
            );
            assert!(
                result.error.as_ref().unwrap().contains("rate limit"),
                "Action '{action}' error should mention rate limit"
            );
        }
    }

    #[tokio::test]
    async fn read_actions_not_blocked_in_readonly_mode() {
        // Read actions skip can_act() but still go through record_action().
        // With rate limit > 0, they should pass security checks and only fail
        // at the client level (no .env file).
        let tool = make_tool(AutonomyLevel::ReadOnly, 100);

        for action in &["list_posts", "get_engagement", "get_profile"] {
            let result = tool
                .execute(json!({
                    "action": action,
                    "post_id": "urn:li:share:123"
                }))
                .await;
            // These will fail at the client level (no .env), but they should NOT
            // return a read-only security error.
            match result {
                Ok(r) => {
                    if !r.success {
                        assert!(
                            !r.error.as_ref().unwrap().contains("read-only"),
                            "Read action '{action}' should not be blocked by read-only mode"
                        );
                    }
                }
                Err(e) => {
                    // Client-level error (no .env) is expected and acceptable
                    let msg = e.to_string();
                    assert!(
                        !msg.contains("read-only"),
                        "Read action '{action}' should not be blocked by read-only mode"
                    );
                }
            }
        }
    }

    #[tokio::test]
    async fn read_actions_blocked_by_rate_limit() {
        let tool = make_tool(AutonomyLevel::ReadOnly, 0);

        for action in &["list_posts", "get_engagement", "get_profile"] {
            let result = tool
                .execute(json!({
                    "action": action,
                    "post_id": "urn:li:share:123"
                }))
                .await
                .unwrap();
            assert!(
                !result.success,
                "Read action '{action}' should be rate-limited"
            );
            assert!(
                result.error.as_ref().unwrap().contains("rate limit"),
                "Read action '{action}' error should mention rate limit"
            );
        }
    }

    #[tokio::test]
    async fn create_post_requires_text() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "create_post"}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("text"));
    }

    #[tokio::test]
    async fn create_post_rejects_empty_text() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "create_post", "text": " "}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("text"));
    }

    #[tokio::test]
    async fn article_title_without_url_rejected() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({
                "action": "create_post",
                "text": "Hello world",
                "article_title": "My Article"
            }))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("article_url"));
    }

    #[tokio::test]
    async fn comment_requires_post_id() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "comment", "text": "Nice post!"}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("post_id"));
    }

    #[tokio::test]
    async fn comment_requires_text() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "comment", "post_id": "urn:li:share:123"}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("text"));
    }

    #[tokio::test]
    async fn react_requires_post_id() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "react", "reaction_type": "LIKE"}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("post_id"));
    }

    #[tokio::test]
    async fn react_requires_reaction_type() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "react", "post_id": "urn:li:share:123"}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("reaction_type"));
    }

    #[tokio::test]
    async fn delete_post_requires_post_id() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "delete_post"}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("post_id"));
    }

    #[tokio::test]
    async fn get_engagement_requires_post_id() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "get_engagement"}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("post_id"));
    }

    #[tokio::test]
    async fn unknown_action_returns_error() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "send_message"}))
            .await
            .unwrap();
        assert!(!result.success);
        assert!(result.error.as_ref().unwrap().contains("Unknown action"));
        assert!(result.error.as_ref().unwrap().contains("send_message"));
    }

    #[tokio::test]
    async fn get_content_strategy_returns_config() {
        let content = LinkedInContentConfig {
            rss_feeds: vec!["https://medium.com/feed/tag/rust".into()],
            github_users: vec!["rareba".into()],
            github_repos: vec!["zeroclaw-labs/zeroclaw".into()],
            topics: vec!["cybersecurity".into(), "Rust".into()],
            persona: "Security engineer and Rust developer".into(),
            instructions: "Write concise posts with hashtags".into(),
        };
        let tool = LinkedInTool::new(
            test_security(AutonomyLevel::Full, 100),
            PathBuf::from("/tmp"),
            "202602".to_string(),
            content,
            LinkedInImageConfig::default(),
        );

        let result = tool
            .execute(json!({"action": "get_content_strategy"}))
            .await
            .unwrap();
        assert!(result.success);
        assert!(result.output.contains("Security engineer"));
        assert!(result.output.contains("cybersecurity"));
        assert!(result.output.contains("medium.com"));
        assert!(result.output.contains("zeroclaw-labs/zeroclaw"));
        assert!(result.output.contains("rareba"));
        assert!(result.output.contains("Write concise posts"));
    }

    #[tokio::test]
    async fn get_content_strategy_empty_config_shows_hint() {
        let tool = make_tool(AutonomyLevel::Full, 100);

        let result = tool
            .execute(json!({"action": "get_content_strategy"}))
            .await
            .unwrap();
        assert!(result.success);
        assert!(result.output.contains("No content strategy configured"));
    }

    #[tokio::test]
    async fn get_content_strategy_not_rate_limited_as_write() {
        // get_content_strategy is a read action and should work in read-only mode
        let tool = make_tool(AutonomyLevel::ReadOnly, 100);

        let result = tool
            .execute(json!({"action": "get_content_strategy"}))
            .await
            .unwrap();
        assert!(result.success);
    }

    #[test]
    fn parameters_schema_includes_get_content_strategy() {
        let tool = make_tool(AutonomyLevel::Full, 100);
        let schema = tool.parameters_schema();
        let actions = schema["properties"]["action"]["enum"].as_array().unwrap();
        assert!(actions.contains(&json!("get_content_strategy")));
    }
}
src/tools/linkedin_client.rs (new file, 1726 lines)
File diff suppressed because it is too large
@@ -47,6 +47,8 @@ pub mod hardware_memory_read;
pub mod http_request;
pub mod image_info;
pub mod knowledge_tool;
pub mod linkedin;
pub mod linkedin_client;
pub mod mcp_client;
pub mod mcp_deferred;
pub mod mcp_protocol;
@@ -108,6 +110,7 @@ pub use hardware_memory_read::HardwareMemoryReadTool;
pub use http_request::HttpRequestTool;
pub use image_info::ImageInfoTool;
pub use knowledge_tool::KnowledgeTool;
pub use linkedin::LinkedInTool;
pub use mcp_client::McpRegistry;
pub use mcp_deferred::{ActivatedToolSet, DeferredMcpToolSet};
pub use mcp_tool::McpToolWrapper;
@@ -461,6 +464,17 @@ pub fn all_tools_with_runtime(
    tool_arcs.push(Arc::new(ScreenshotTool::new(security.clone())));
    tool_arcs.push(Arc::new(ImageInfoTool::new(security.clone())));

    // LinkedIn integration (config-gated)
    if root_config.linkedin.enabled {
        tool_arcs.push(Arc::new(LinkedInTool::new(
            security.clone(),
            workspace_dir.to_path_buf(),
            root_config.linkedin.api_version.clone(),
            root_config.linkedin.content.clone(),
            root_config.linkedin.image.clone(),
        )));
    }

    if let Some(key) = composio_key {
        if !key.is_empty() {
            tool_arcs.push(Arc::new(ComposioTool::new(
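The `root_config.linkedin.enabled` gate above reads from the `[linkedin]` table in config.toml. A sketch of the corresponding fragment, with table and field names taken from this diff and all values illustrative (drawn from the test fixture):

```toml
[linkedin]
enabled = true            # default is false; the tool is not registered otherwise
api_version = "202602"    # configurable API version (the old hardcoded 202402 had expired)

[linkedin.content]
rss_feeds = ["https://medium.com/feed/tag/rust"]
github_users = ["rareba"]
github_repos = ["zeroclaw-labs/zeroclaw"]
topics = ["cybersecurity", "Rust"]
persona = "Security engineer and Rust developer"
instructions = "Write concise posts with hashtags"

[linkedin.image]
enabled = false           # opt-in image generation with SVG fallback
```

Credentials themselves stay out of config.toml; per the tool description they are loaded from `LINKEDIN_*` entries in the workspace .env file.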