Compare commits

...

8 Commits

Author SHA1 Message Date
simianastronaut 91b937c2cf fix(gateway): auto-build web frontend so pairing config check reaches the browser (#2879)
The web dashboard's useAuth hook already checks GET /health for
require_pairing before showing the pairing modal, but the compiled
frontend assets in web/dist/ were never rebuilt after that fix landed
(web/dist is gitignored, and build.rs only created an empty
directory).

Update build.rs to automatically run `npm ci && npm run build` when
web/dist/index.html is missing and npm is available. This ensures
cargo build produces a binary with up-to-date dashboard assets that
respect the server's require_pairing configuration.

The build is best-effort: when Node.js is absent (CI containers,
cross-compilation) the Rust build still succeeds with an empty dist.

Also adds tests verifying the /health endpoint correctly exposes
require_pairing as both true and false.

Closes #2879

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 11:03:03 -04:00
SimianAstronaut7 893788f04d fix(security): strip URLs before high-entropy token extraction (#3064) (#3321)
The credential leak detector's check_high_entropy_tokens would
false-positive on URL path segments (e.g. long alphanumeric filenames)
because extract_candidate_tokens included '/' in the token character
set, creating long mixed-alpha-digit tokens that exceeded the Shannon
entropy threshold.

Fix: strip URLs from content before extracting candidate tokens for
entropy analysis. Structural pattern checks (API keys, JWTs, AWS
credentials) use dedicated regexes and are unaffected.
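A minimal, stdlib-only sketch of the idea (whitespace splitting stands in for the real `https?://\S+` regex, and the function names here are illustrative, not the detector's actual API): once URLs are dropped, a long mixed-alphanumeric path segment never reaches the entropy scorer, while genuinely diverse strings still score high.

```rust
use std::collections::HashMap;

/// Shannon entropy in bits per character.
fn shannon_entropy(s: &str) -> f64 {
    let mut counts: HashMap<char, usize> = HashMap::new();
    for c in s.chars() {
        *counts.entry(c).or_insert(0) += 1;
    }
    let len = s.chars().count() as f64;
    counts
        .values()
        .map(|&n| {
            let p = n as f64 / len;
            -p * p.log2()
        })
        .sum()
}

/// Drop whitespace-delimited URLs before token extraction.
/// (Sketch only: the real detector strips URLs with a regex.)
fn strip_urls(content: &str) -> String {
    content
        .split_whitespace()
        .filter(|w| !w.starts_with("http://") && !w.starts_with("https://"))
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    // A URL path segment long and mixed enough to look high-entropy...
    let text = "report at https://example.com/org/documents/2024-report-a1b2c3d4e5f6g7h8i9j0";
    // ...never reaches the entropy check once URLs are stripped first.
    assert!(!strip_urls(text).contains("a1b2c3d4"));
    // Uniform strings carry no information; diverse ones score high.
    assert!(shannon_entropy("aaaaaaaa") < 0.01);
    assert!(shannon_entropy("aB3xQ9zL2mX7pR4t") > 3.5);
}
```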

Closes #3064

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:53:38 +00:00
SimianAstronaut7 0fea62d114 fix(tool): resolve Brave API key lazily with decryption support (#3078) (#3320)
WebSearchTool previously stored the Brave API key once at boot and never
re-read it. This caused three failures: (1) keys set after boot via
web_search_config were ignored, (2) encrypted keys passed as raw enc2:
blobs to the Brave API, and (3) keys absent at startup left the tool
permanently broken.

The fix adds lazy key resolution at execution time. A fast path returns
the boot-time key when it is plaintext and non-empty. When the boot key
is missing or still encrypted, the tool re-reads config.toml, decrypts
the value through SecretStore, and uses the result. This also means
runtime config updates (e.g. `web_search_config set brave_api_key=...`)
are picked up on the next search invocation.
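A condensed model of the fast-path/slow-path resolution (hypothetical types: the `enc2:` prefix comes from the commit, while the struct, field, and the `reload` closure standing in for "re-read config.toml and decrypt via SecretStore" are illustrative):

```rust
/// Sketch of lazy credential resolution: return the boot-time key when
/// it is usable, otherwise re-read and decrypt on demand.
struct WebSearch {
    boot_key: Option<String>,
}

impl WebSearch {
    fn resolve_key(&self, reload: impl Fn() -> Option<String>) -> Option<String> {
        match &self.boot_key {
            // Fast path: plaintext, non-empty key captured at boot.
            Some(k) if !k.is_empty() && !k.starts_with("enc2:") => Some(k.clone()),
            // Slow path: key missing or still encrypted at boot ->
            // re-read config and decrypt (modeled by `reload` here).
            _ => reload(),
        }
    }
}

fn main() {
    let boot = WebSearch { boot_key: Some("brave-key-123".into()) };
    assert_eq!(boot.resolve_key(|| None).as_deref(), Some("brave-key-123"));

    let encrypted = WebSearch { boot_key: Some("enc2:abcd".into()) };
    assert_eq!(
        encrypted.resolve_key(|| Some("decrypted".into())).as_deref(),
        Some("decrypted")
    );

    let missing = WebSearch { boot_key: None };
    assert_eq!(missing.resolve_key(|| Some("late-set".into())).as_deref(), Some("late-set"));
}
```

Because the slow path runs on every execution where the fast path fails, runtime key updates take effect on the next search without a restart.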

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:53:35 +00:00
SimianAstronaut7 cca3cf8f84 fix(agent): use char-boundary-safe slicing in scrub_credentials (#3024) (#3319)
Replace byte-level `&val[..4]` slice with `char_indices().nth(4)` to
prevent a panic when the captured credential value contains multi-byte
UTF-8 characters (e.g. Chinese text). Adds a regression test with
CJK input.
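The core of the fix, isolated as a sketch (`safe_prefix` is an illustrative name; the real code lives inside `scrub_credentials` and additionally returns an empty prefix for short values):

```rust
/// First-4-chars prefix that can never panic on a char boundary:
/// `char_indices().nth(4)` yields the byte offset of the 5th character,
/// which is always a valid slice point.
fn safe_prefix(val: &str) -> &str {
    val.char_indices()
        .nth(4)
        .map(|(byte_idx, _)| &val[..byte_idx])
        .unwrap_or(val)
}

fn main() {
    assert_eq!(safe_prefix("abcdefgh"), "abcd");
    // Each CJK char is 3 bytes in UTF-8, so the old `&val[..4]` would
    // split the second character and panic here.
    assert_eq!(safe_prefix("你的WiFi密码"), "你的Wi");
    // Strings of 4 or fewer chars come back whole.
    assert_eq!(safe_prefix("abc"), "abc");
}
```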

Closes #3024

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:53:32 +00:00
SimianAstronaut7 7170810e98 fix(channel): drop MutexGuard before .await in WhatsApp Web listen (#3315)
Extract `self.bot_handle.lock().take()` into a separate `let` binding
so the parking_lot::MutexGuard is dropped before the `.await`, making
the listen future Send again.
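A minimal sketch of the pattern, assuming a `std::sync::Mutex` in place of parking_lot (both guards are `!Send`, so the reasoning is the same) and a `u32` standing in for the real `JoinHandle`:

```rust
use std::sync::Mutex;

struct Channel {
    // Stand-in for the real bot task JoinHandle.
    bot_handle: Mutex<Option<u32>>,
}

async fn wait_for(_handle: u32) {}

impl Channel {
    async fn disconnect(&self) {
        // Binding the taken value first drops the temporary MutexGuard at
        // the end of this statement, i.e. before the `.await` below, so
        // the future stays Send. Writing
        //   if let Some(h) = self.bot_handle.lock().unwrap().take() { ... .await }
        // instead keeps the guard alive across the await point and makes
        // the future !Send.
        let handle = self.bot_handle.lock().unwrap().take();
        if let Some(h) = handle {
            wait_for(h).await;
        }
    }
}

// Compile-time probe: accepts only Send futures.
fn requires_send<F: std::future::Future + Send>(_f: F) {}

fn main() {
    let ch = Channel { bot_handle: Mutex::new(Some(7)) };
    // This line fails to compile if the guard lives across the .await.
    requires_send(ch.disconnect());
    // The future above was built but never polled, so the handle remains.
    assert!(ch.bot_handle.lock().unwrap().is_some());
}
```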

Closes #3312

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:16:40 +00:00
SimianAstronaut7 816fa87d60 fix(channel): restore Lark/Feishu channel compilation (#3318)
Replace the `use_feishu: bool` field on `LarkChannel` with a
`platform: LarkPlatform` enum field, add `mention_only` to the
`new_with_platform` constructor, and introduce `from_lark_config` /
`from_feishu_config` factory methods so the channel factory in
`mod.rs` and the existing tests compile.

Resolves #3302

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 13:16:31 +00:00
Argenis dcffa4d7fb fix(ci): add missing ci-run.yml workflow (#3268)
The ci-run.yml workflow was referenced in docs/contributing/ci-map.md
and branch protection rules but never existed in the repository,
causing push-triggered CI runs to fail immediately with zero jobs
and no logs.

This adds the workflow with all documented jobs: lint, strict delta
lint, test, build (linux + macOS), docs quality, and the CI Required
Gate composite check. Triggers on both push and pull_request to master.

Fixes #2853

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 12:48:16 +00:00
Argenis e03dc4bfce fix(security): unify cron shell validation across API/CLI/scheduler (#3270)
Centralize cron shell command validation so all entrypoints enforce the
same security policy (allowlist + risk gate + approval) before
persistence and execution.

Changes:
- Add validate_shell_command() and validate_shell_command_with_security()
  as the single validation gate for all cron shell paths
- Add add_shell_job_with_approval() and update_shell_job_with_approval()
  that validate before persisting
- Add add_once_validated() and add_once_at_validated() for one-shot jobs
- Make raw add_shell_job/add_job/add_once/add_once_at pub(crate) to
  prevent unvalidated writes from outside the cron module
- Route gateway API through validated creation path
- Route schedule tool through validated helpers (single validation)
- Route cron_add/cron_update tools through validated helpers
- Unify scheduler execution validation via validate_shell_command_with_security
- CLI update handler uses full validate_command_execution instead of
  just is_command_allowed
- Add focused tests for validation parity across entrypoints
- Standardize error format to "blocked by security policy: {reason}"

Closes #2741
Closes #2742

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 12:48:13 +00:00
17 changed files with 1120 additions and 82 deletions
+166
@@ -0,0 +1,166 @@
name: CI
on:
push:
branches: [master]
pull_request:
branches: [master]
concurrency:
group: ci-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: rustfmt, clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Check formatting
run: cargo fmt --all -- --check
- name: Clippy
run: cargo clippy --all-targets -- -D warnings
lint-strict-delta:
name: Strict Delta Lint
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
components: clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Run strict delta lint gate
run: bash scripts/ci/rust_strict_delta_gate.sh
env:
BASE_SHA: ${{ github.event.pull_request.base.sha || github.event.before }}
test:
name: Test
runs-on: ubuntu-latest
timeout-minutes: 30
needs: [lint]
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Install mold linker
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Install cargo-nextest
run: curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin
- name: Run tests
run: cargo nextest run --locked
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
build:
name: Build ${{ matrix.target }}
runs-on: ${{ matrix.os }}
timeout-minutes: 40
needs: [lint]
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
- os: macos-14
target: aarch64-apple-darwin
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: dtolnay/rust-toolchain@631a55b12751854ce901bb631d5902ceb48146f7 # stable
with:
toolchain: 1.92.0
targets: ${{ matrix.target }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2
- name: Install mold linker
if: runner.os == 'Linux'
run: |
sudo apt-get update -qq
sudo apt-get install -y mold
- name: Ensure web/dist placeholder exists
run: mkdir -p web/dist && touch web/dist/.gitkeep
- name: Build release
shell: bash
run: cargo build --release --locked --target ${{ matrix.target }}
env:
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER: clang
CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUSTFLAGS: "-C link-arg=-fuse-ld=mold"
docs-quality:
name: Docs Quality
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- uses: actions/setup-node@1d0ff469b7ec7b3cb9d8673fde0c81c44821de2a # v4
with:
node-version: 20
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5
with:
python-version: "3.12"
- name: Run docs quality gate
run: bash scripts/ci/docs_quality_gate.sh
env:
BASE_SHA: ${{ github.event.pull_request.base.sha || github.event.before }}
# Composite status check — branch protection requires this single job.
gate:
name: CI Required Gate
if: always()
needs: [lint, lint-strict-delta, test, build, docs-quality]
runs-on: ubuntu-latest
steps:
- name: Check upstream job results
env:
HAS_FAILURE: ${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}
run: |
if [[ "$HAS_FAILURE" == "true" ]]; then
echo "::error::One or more upstream jobs failed or were cancelled"
exit 1
fi
+107 -3
@@ -1,6 +1,110 @@
use std::path::Path;
use std::process::Command;
fn main() {
-let dir = std::path::Path::new("web/dist");
-if !dir.exists() {
-std::fs::create_dir_all(dir).expect("failed to create web/dist/");
let dist_dir = Path::new("web/dist");
let web_dir = Path::new("web");
// Tell Cargo to re-run this script when web source files change.
println!("cargo:rerun-if-changed=web/src");
println!("cargo:rerun-if-changed=web/index.html");
println!("cargo:rerun-if-changed=web/package.json");
println!("cargo:rerun-if-changed=web/vite.config.ts");
// Attempt to build the web frontend if npm is available and web/dist is
// missing or stale. The build is best-effort: when Node.js is not
// installed (e.g. CI containers, cross-compilation, minimal dev setups)
// we fall back to the existing stub/empty dist directory so the Rust
// build still succeeds.
let needs_build = !dist_dir.join("index.html").exists();
if needs_build && web_dir.join("package.json").exists() {
if let Ok(npm) = which_npm() {
println!("cargo:warning=Building web frontend (web/dist is missing or stale)...");
// npm ci / npm install
let install_status = Command::new(&npm)
.args(["ci", "--ignore-scripts"])
.current_dir(web_dir)
.status();
match install_status {
Ok(s) if s.success() => {}
Ok(s) => {
// Fall back to `npm install` if `npm ci` fails (no lockfile, etc.)
println!("cargo:warning=npm ci exited with {s}, trying npm install...");
let fallback = Command::new(&npm)
.args(["install"])
.current_dir(web_dir)
.status();
if !matches!(fallback, Ok(s) if s.success()) {
println!("cargo:warning=npm install failed — skipping web build");
ensure_dist_dir(dist_dir);
return;
}
}
Err(e) => {
println!("cargo:warning=Could not run npm: {e} — skipping web build");
ensure_dist_dir(dist_dir);
return;
}
}
// npm run build
let build_status = Command::new(&npm)
.args(["run", "build"])
.current_dir(web_dir)
.status();
match build_status {
Ok(s) if s.success() => {
println!("cargo:warning=Web frontend built successfully.");
}
Ok(s) => {
println!(
"cargo:warning=npm run build exited with {s} — web dashboard may be unavailable"
);
}
Err(e) => {
println!(
"cargo:warning=Could not run npm build: {e} — web dashboard may be unavailable"
);
}
}
}
}
ensure_dist_dir(dist_dir);
}
/// Ensure the dist directory exists so `rust-embed` does not fail at compile
/// time even when the web frontend is not built.
fn ensure_dist_dir(dist_dir: &Path) {
if !dist_dir.exists() {
std::fs::create_dir_all(dist_dir).expect("failed to create web/dist/");
}
}
/// Locate the `npm` binary on the system PATH.
fn which_npm() -> Result<String, ()> {
let cmd = if cfg!(target_os = "windows") {
"where"
} else {
"which"
};
Command::new(cmd)
.arg("npm")
.output()
.ok()
.and_then(|output| {
if output.status.success() {
String::from_utf8(output.stdout)
.ok()
.map(|s| s.lines().next().unwrap_or("npm").trim().to_string())
} else {
None
}
})
.ok_or(())
}
+25 -2
@@ -63,8 +63,17 @@ pub(crate) fn scrub_credentials(input: &str) -> String {
.map(|m| m.as_str())
.unwrap_or("");
-// Preserve first 4 chars for context, then redact
-let prefix = if val.len() > 4 { &val[..4] } else { "" };
+// Preserve first 4 chars for context, then redact.
+// Use char_indices to find the byte offset of the 4th character
+// so we never slice in the middle of a multi-byte UTF-8 sequence.
+let prefix = if val.len() > 4 {
+val.char_indices()
+.nth(4)
+.map(|(byte_idx, _)| &val[..byte_idx])
+.unwrap_or(val)
+} else {
+""
+};
if full_match.contains(':') {
if full_match.contains('"') {
@@ -5468,6 +5477,20 @@ Let me check the result."#;
);
}
#[test]
fn scrub_credentials_multibyte_chars_no_panic() {
// Regression test for #3024: byte index 4 is not a char boundary
// when the captured value contains multi-byte UTF-8 characters.
// The regex only matches quoted values for non-ASCII content, since
// capture group 4 is restricted to [a-zA-Z0-9_\-\.].
let input = "password=\"\u{4f60}\u{7684}WiFi\u{5bc6}\u{7801}ab\"";
let result = scrub_credentials(input);
assert!(
result.contains("[REDACTED]"),
"multi-byte quoted value should be redacted without panic, got: {result}"
);
}
#[test]
fn scrub_credentials_short_values_not_redacted() {
// Values shorter than 8 chars should not be redacted
+39 -3
@@ -296,8 +296,8 @@ pub struct LarkChannel {
/// Bot open_id resolved at runtime via `/bot/v3/info`.
resolved_bot_open_id: Arc<StdRwLock<Option<String>>>,
mention_only: bool,
-/// When true, use Feishu (CN) endpoints; when false, use Lark (international).
-use_feishu: bool,
+/// Platform variant: Lark (international) or Feishu (CN).
+platform: LarkPlatform,
/// How to receive events: WebSocket long-connection or HTTP webhook.
receive_mode: crate::config::schema::LarkReceiveMode,
/// Cached tenant access token
@@ -321,6 +321,7 @@ impl LarkChannel {
verification_token,
port,
allowed_users,
mention_only,
LarkPlatform::Lark,
)
}
@@ -331,6 +332,7 @@ impl LarkChannel {
verification_token: String,
port: Option<u16>,
allowed_users: Vec<String>,
mention_only: bool,
platform: LarkPlatform,
) -> Self {
Self {
@@ -341,7 +343,7 @@ impl LarkChannel {
allowed_users,
resolved_bot_open_id: Arc::new(StdRwLock::new(None)),
mention_only,
-use_feishu: true,
+platform,
receive_mode: crate::config::schema::LarkReceiveMode::default(),
tenant_token: Arc::new(RwLock::new(None)),
ws_seen_ids: Arc::new(RwLock::new(HashMap::new())),
@@ -363,6 +365,39 @@ impl LarkChannel {
config.port,
config.allowed_users.clone(),
config.mention_only,
platform,
);
ch.receive_mode = config.receive_mode.clone();
ch
}
/// Build from `LarkConfig` forcing `LarkPlatform::Lark`, ignoring the
/// legacy `use_feishu` flag. Used by the channel factory when the config
/// section is explicitly `[channels_config.lark]`.
pub fn from_lark_config(config: &crate::config::schema::LarkConfig) -> Self {
let mut ch = Self::new_with_platform(
config.app_id.clone(),
config.app_secret.clone(),
config.verification_token.clone().unwrap_or_default(),
config.port,
config.allowed_users.clone(),
config.mention_only,
LarkPlatform::Lark,
);
ch.receive_mode = config.receive_mode.clone();
ch
}
/// Build from `FeishuConfig` with `LarkPlatform::Feishu`.
pub fn from_feishu_config(config: &crate::config::schema::FeishuConfig) -> Self {
let mut ch = Self::new_with_platform(
config.app_id.clone(),
config.app_secret.clone(),
config.verification_token.clone().unwrap_or_default(),
config.port,
config.allowed_users.clone(),
false,
LarkPlatform::Feishu,
);
ch.receive_mode = config.receive_mode.clone();
ch
@@ -2078,6 +2113,7 @@ mod tests {
encrypt_key: None,
verification_token: Some("vtoken789".into()),
allowed_users: vec!["*".into()],
mention_only: false,
use_feishu: true,
receive_mode: LarkReceiveMode::Webhook,
port: Some(9898),
+2 -1
@@ -554,7 +554,8 @@ impl Channel for WhatsAppWebChannel {
};
*self.client.lock() = None;
-if let Some(handle) = self.bot_handle.lock().take() {
+let handle = self.bot_handle.lock().take();
+if let Some(handle) = handle {
handle.abort();
// Await the aborted task so background I/O finishes before
// we delete session files.
+291 -18
@@ -1,6 +1,6 @@
use crate::config::Config;
use crate::security::SecurityPolicy;
use anyhow::{bail, Result};
use anyhow::{anyhow, bail, Result};
mod schedule;
mod store;
@@ -14,11 +14,106 @@ pub use schedule::{
};
#[allow(unused_imports)]
pub use store::{
-add_agent_job, add_job, add_shell_job, due_jobs, get_job, list_jobs, list_runs,
-record_last_run, record_run, remove_job, reschedule_after_run, update_job,
+add_agent_job, due_jobs, get_job, list_jobs, list_runs, record_last_run, record_run,
+remove_job, reschedule_after_run, update_job,
};
pub use types::{CronJob, CronJobPatch, CronRun, DeliveryConfig, JobType, Schedule, SessionTarget};
/// Validate a shell command against the full security policy (allowlist + risk gate).
///
/// Returns `Ok(())` if the command passes all checks, or an error describing
/// why it was blocked.
pub fn validate_shell_command(config: &Config, command: &str, approved: bool) -> Result<()> {
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
validate_shell_command_with_security(&security, command, approved)
}
/// Validate a shell command using an existing `SecurityPolicy` instance.
///
/// Preferred when the caller already holds a `SecurityPolicy` (e.g. scheduler).
pub(crate) fn validate_shell_command_with_security(
security: &SecurityPolicy,
command: &str,
approved: bool,
) -> Result<()> {
security
.validate_command_execution(command, approved)
.map(|_| ())
.map_err(|reason| anyhow!("blocked by security policy: {reason}"))
}
/// Create a validated shell job, enforcing security policy before persistence.
///
/// All entrypoints that create shell cron jobs should route through this
/// function to guarantee consistent policy enforcement.
pub fn add_shell_job_with_approval(
config: &Config,
name: Option<String>,
schedule: Schedule,
command: &str,
approved: bool,
) -> Result<CronJob> {
validate_shell_command(config, command, approved)?;
store::add_shell_job(config, name, schedule, command)
}
/// Update a shell job's command with security validation.
///
/// Validates the new command (if changed) before persisting.
pub fn update_shell_job_with_approval(
config: &Config,
job_id: &str,
patch: CronJobPatch,
approved: bool,
) -> Result<CronJob> {
if let Some(command) = patch.command.as_deref() {
validate_shell_command(config, command, approved)?;
}
update_job(config, job_id, patch)
}
/// Create a one-shot validated shell job from a delay string (e.g. "30m").
pub fn add_once_validated(
config: &Config,
delay: &str,
command: &str,
approved: bool,
) -> Result<CronJob> {
let duration = parse_delay(delay)?;
let at = chrono::Utc::now() + duration;
add_once_at_validated(config, at, command, approved)
}
/// Create a one-shot validated shell job at an absolute timestamp.
pub fn add_once_at_validated(
config: &Config,
at: chrono::DateTime<chrono::Utc>,
command: &str,
approved: bool,
) -> Result<CronJob> {
let schedule = Schedule::At { at };
add_shell_job_with_approval(config, None, schedule, command, approved)
}
// Convenience wrappers for CLI paths (default approved=false).
pub(crate) fn add_shell_job(
config: &Config,
name: Option<String>,
schedule: Schedule,
command: &str,
) -> Result<CronJob> {
add_shell_job_with_approval(config, name, schedule, command, false)
}
pub(crate) fn add_job(config: &Config, expression: &str, command: &str) -> Result<CronJob> {
let schedule = Schedule::Cron {
expr: expression.to_string(),
tz: None,
};
add_shell_job(config, None, schedule, command)
}
#[allow(clippy::needless_pass_by_value)]
pub fn handle_command(command: crate::CronCommands, config: &Config) -> Result<()> {
match command {
@@ -128,13 +223,6 @@ pub fn handle_command(command: crate::CronCommands, config: &Config) -> Result<(
None
};
-if let Some(ref cmd) = command {
-let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
-if !security.is_command_allowed(cmd) {
-bail!("Command blocked by security policy: {cmd}");
-}
-}
let patch = CronJobPatch {
schedule,
command,
@@ -142,7 +230,7 @@ pub fn handle_command(command: crate::CronCommands, config: &Config) -> Result<(
..CronJobPatch::default()
};
-let job = update_job(config, &id, patch)?;
+let job = update_shell_job_with_approval(config, &id, patch, false)?;
println!("\u{2705} Updated cron job {}", job.id);
println!(" Expr: {}", job.expression);
println!(" Next: {}", job.next_run.to_rfc3339());
@@ -163,19 +251,16 @@ pub fn handle_command(command: crate::CronCommands, config: &Config) -> Result<(
}
}
-pub fn add_once(config: &Config, delay: &str, command: &str) -> Result<CronJob> {
-let duration = parse_delay(delay)?;
-let at = chrono::Utc::now() + duration;
-add_once_at(config, at, command)
+pub(crate) fn add_once(config: &Config, delay: &str, command: &str) -> Result<CronJob> {
+add_once_validated(config, delay, command, false)
}
-pub fn add_once_at(
+pub(crate) fn add_once_at(
config: &Config,
at: chrono::DateTime<chrono::Utc>,
command: &str,
) -> Result<CronJob> {
-let schedule = Schedule::At { at };
-add_shell_job(config, None, schedule, command)
+add_once_at_validated(config, at, command, false)
}
pub fn pause_job(config: &Config, id: &str) -> Result<CronJob> {
@@ -413,4 +498,192 @@ mod tests {
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
assert!(security.is_command_allowed("echo safe"));
}
#[test]
fn add_shell_job_requires_explicit_approval_for_medium_risk() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["echo".into(), "touch".into()];
let denied = add_shell_job(
&config,
None,
Schedule::Cron {
expr: "*/5 * * * *".into(),
tz: None,
},
"touch cron-medium-risk",
);
assert!(denied.is_err());
assert!(denied
.unwrap_err()
.to_string()
.contains("explicit approval"));
let approved = add_shell_job_with_approval(
&config,
None,
Schedule::Cron {
expr: "*/5 * * * *".into(),
tz: None,
},
"touch cron-medium-risk",
true,
);
assert!(approved.is_ok(), "{approved:?}");
}
#[test]
fn update_requires_explicit_approval_for_medium_risk() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["echo".into(), "touch".into()];
let job = make_job(&config, "*/5 * * * *", None, "echo original");
let denied = update_shell_job_with_approval(
&config,
&job.id,
CronJobPatch {
command: Some("touch cron-medium-risk-update".into()),
..CronJobPatch::default()
},
false,
);
assert!(denied.is_err());
assert!(denied
.unwrap_err()
.to_string()
.contains("explicit approval"));
let approved = update_shell_job_with_approval(
&config,
&job.id,
CronJobPatch {
command: Some("touch cron-medium-risk-update".into()),
..CronJobPatch::default()
},
true,
)
.unwrap();
assert_eq!(approved.command, "touch cron-medium-risk-update");
}
#[test]
fn cli_update_requires_explicit_approval_for_medium_risk() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["echo".into(), "touch".into()];
let job = make_job(&config, "*/5 * * * *", None, "echo original");
let result = run_update(
&config,
&job.id,
None,
None,
Some("touch cron-cli-medium-risk"),
None,
);
assert!(result.is_err());
assert!(result
.unwrap_err()
.to_string()
.contains("explicit approval"));
}
#[test]
fn add_once_validated_creates_one_shot_job() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let job = add_once_validated(&config, "1h", "echo one-shot", false).unwrap();
assert_eq!(job.command, "echo one-shot");
assert!(matches!(job.schedule, Schedule::At { .. }));
}
#[test]
fn add_once_validated_blocks_disallowed_command() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["echo".into()];
config.autonomy.level = crate::security::AutonomyLevel::Supervised;
let result = add_once_validated(&config, "1h", "curl https://example.com", false);
assert!(result.is_err());
assert!(result
.unwrap_err()
.to_string()
.contains("blocked by security policy"));
}
#[test]
fn add_once_at_validated_creates_one_shot_job() {
let tmp = TempDir::new().unwrap();
let config = test_config(&tmp);
let at = chrono::Utc::now() + chrono::Duration::hours(1);
let job = add_once_at_validated(&config, at, "echo at-shot", false).unwrap();
assert_eq!(job.command, "echo at-shot");
assert!(matches!(job.schedule, Schedule::At { .. }));
}
#[test]
fn add_once_at_validated_blocks_medium_risk_without_approval() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["echo".into(), "touch".into()];
let at = chrono::Utc::now() + chrono::Duration::hours(1);
let denied = add_once_at_validated(&config, at, "touch at-medium", false);
assert!(denied.is_err());
assert!(denied
.unwrap_err()
.to_string()
.contains("explicit approval"));
let approved = add_once_at_validated(&config, at, "touch at-medium", true);
assert!(approved.is_ok(), "{approved:?}");
}
#[test]
fn gateway_api_path_validates_shell_command() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["echo".into()];
config.autonomy.level = crate::security::AutonomyLevel::Supervised;
// Simulate gateway API path: add_shell_job_with_approval(approved=false)
let result = add_shell_job_with_approval(
&config,
None,
Schedule::Cron {
expr: "*/5 * * * *".into(),
tz: None,
},
"curl https://example.com",
false,
);
assert!(result.is_err());
assert!(result
.unwrap_err()
.to_string()
.contains("blocked by security policy"));
}
#[test]
fn scheduler_path_validates_shell_command() {
let tmp = TempDir::new().unwrap();
let mut config = test_config(&tmp);
config.autonomy.allowed_commands = vec!["echo".into()];
config.autonomy.level = crate::security::AutonomyLevel::Supervised;
let security = SecurityPolicy::from_config(&config.autonomy, &config.workspace_dir);
// Simulate scheduler validation path
let result =
validate_shell_command_with_security(&security, "curl https://example.com", false);
assert!(result.is_err());
assert!(result
.unwrap_err()
.to_string()
.contains("blocked by security policy"));
}
}
+11 -10
@@ -406,14 +406,15 @@ async fn run_job_command_with_timeout(
);
}
-if !security.is_command_allowed(&job.command) {
-return (
-false,
-format!(
-"blocked by security policy: command not allowed: {}",
-job.command
-),
-);
+// Unified command validation: allowlist + risk + path checks in one call.
+// Jobs created via the validated helpers were already checked at creation
+// time, but we re-validate at execution time to catch policy changes and
+// manually-edited job stores.
+let approved = false; // scheduler runs are never pre-approved
+if let Err(error) =
+crate::cron::validate_shell_command_with_security(security, &job.command, approved)
+{
+return (false, error.to_string());
+}
if let Some(path) = security.forbidden_path_argument(&job.command) {
@@ -565,7 +566,7 @@ mod tests {
let (success, output) = run_job_command(&config, &security, &job).await;
assert!(!success);
assert!(output.contains("blocked by security policy"));
-assert!(output.contains("command not allowed"));
+assert!(output.to_lowercase().contains("not allowed"));
}
#[tokio::test]
@@ -639,7 +640,7 @@ mod tests {
let (success, output) = run_job_command(&config, &security, &job).await;
assert!(!success);
assert!(output.contains("blocked by security policy"));
-assert!(output.contains("command not allowed"));
+assert!(output.to_lowercase().contains("not allowed"));
}
#[tokio::test]
+7 -1
@@ -257,7 +257,13 @@ pub async fn handle_api_cron_add(
tz: None,
};
-match crate::cron::add_shell_job(&config, body.name, schedule, &body.command) {
+match crate::cron::add_shell_job_with_approval(
+&config,
+body.name,
+schedule,
+&body.command,
+false,
+) {
Ok(job) => Json(serde_json::json!({
"status": "ok",
"job": {
+74
@@ -2940,4 +2940,78 @@ mod tests {
let err = require_localhost(&peer).unwrap_err();
assert_eq!(err.0, StatusCode::FORBIDDEN);
}
#[tokio::test]
async fn health_endpoint_exposes_require_pairing_false() {
let state = AppState {
config: Arc::new(Mutex::new(Config::default())),
provider: Arc::new(MockProvider::default()),
model: "test-model".into(),
temperature: 0.0,
mem: Arc::new(MockMemory),
auto_save: false,
webhook_secret_hash: None,
pairing: Arc::new(PairingGuard::new(false, &[])),
trust_forwarded_headers: false,
rate_limiter: Arc::new(GatewayRateLimiter::new(100, 100, 100)),
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
};
let response = handle_health(State(state)).await.into_response();
assert_eq!(response.status(), StatusCode::OK);
let body = response.into_body().collect().await.unwrap().to_bytes();
let parsed: serde_json::Value = serde_json::from_slice(&body).unwrap();
assert_eq!(parsed["status"], "ok");
assert_eq!(parsed["require_pairing"], false);
}
#[tokio::test]
async fn health_endpoint_exposes_require_pairing_true() {
let state = AppState {
config: Arc::new(Mutex::new(Config::default())),
provider: Arc::new(MockProvider::default()),
model: "test-model".into(),
temperature: 0.0,
mem: Arc::new(MockMemory),
auto_save: false,
webhook_secret_hash: None,
pairing: Arc::new(PairingGuard::new(true, &[])),
trust_forwarded_headers: false,
rate_limiter: Arc::new(GatewayRateLimiter::new(100, 100, 100)),
idempotency_store: Arc::new(IdempotencyStore::new(Duration::from_secs(300), 1000)),
whatsapp: None,
whatsapp_app_secret: None,
linq: None,
linq_signing_secret: None,
nextcloud_talk: None,
nextcloud_talk_webhook_secret: None,
wati: None,
observer: Arc::new(crate::observability::NoopObserver),
tools_registry: Arc::new(Vec::new()),
cost_tracker: None,
event_tx: tokio::sync::broadcast::channel(16).0,
shutdown_tx: tokio::sync::watch::channel(false).0,
};
let response = handle_health(State(state)).await.into_response();
assert_eq!(response.status(), StatusCode::OK);
let body = response.into_body().collect().await.unwrap().to_bytes();
let parsed: serde_json::Value = serde_json::from_slice(&body).unwrap();
assert_eq!(parsed["status"], "ok");
assert_eq!(parsed["require_pairing"], true);
}
}
+154
@@ -7,8 +7,12 @@
//! Contributed from RustyClaw (MIT licensed).
use regex::Regex;
use std::collections::HashMap;
use std::sync::OnceLock;
/// Minimum token length considered for high-entropy detection.
const ENTROPY_TOKEN_MIN_LEN: usize = 24;
/// Result of leak detection.
#[derive(Debug, Clone)]
pub enum LeakResult {
@@ -61,6 +65,7 @@ impl LeakDetector {
self.check_private_keys(content, &mut patterns, &mut redacted);
self.check_jwt_tokens(content, &mut patterns, &mut redacted);
self.check_database_urls(content, &mut patterns, &mut redacted);
self.check_high_entropy_tokens(content, &mut patterns, &mut redacted);
if patterns.is_empty() {
LeakResult::Clean
@@ -288,6 +293,72 @@ impl LeakDetector {
}
}
}
/// Check for high-entropy tokens that may be leaked credentials.
///
/// Extracts candidate tokens from content (after stripping URLs to avoid
/// false-positives on path segments) and flags any that exceed the Shannon
/// entropy threshold derived from the detector's sensitivity.
fn check_high_entropy_tokens(
&self,
content: &str,
patterns: &mut Vec<String>,
redacted: &mut String,
) {
// Entropy threshold scales with sensitivity: at 0.7 this is ~4.37.
let entropy_threshold = 3.5 + self.sensitivity * 1.25;
// Strip URLs before extracting tokens so that path segments like
// "org/documents/2024-report-a1b2c3d4e5f6g7h8i9j0" are not mistaken
// for high-entropy credentials.
static URL_PATTERN: OnceLock<Regex> = OnceLock::new();
let url_re = URL_PATTERN.get_or_init(|| Regex::new(r"https?://\S+").unwrap());
let content_without_urls = url_re.replace_all(content, "");
let tokens = extract_candidate_tokens(&content_without_urls);
for token in tokens {
if token.len() >= ENTROPY_TOKEN_MIN_LEN {
let entropy = shannon_entropy(token);
if entropy >= entropy_threshold && has_mixed_alpha_digit(token) {
patterns.push("High-entropy token".to_string());
*redacted = redacted.replace(token, "[REDACTED_HIGH_ENTROPY_TOKEN]");
}
}
}
}
}
/// Extract candidate tokens by splitting on characters outside the
/// alphanumeric + common credential character set.
fn extract_candidate_tokens(content: &str) -> Vec<&str> {
content
.split(|c: char| !c.is_ascii_alphanumeric() && c != '_' && c != '-' && c != '+' && c != '/')
.filter(|s| !s.is_empty())
.collect()
}
/// Compute Shannon entropy (bits per character) for the given string.
fn shannon_entropy(s: &str) -> f64 {
let len = s.len() as f64;
if len == 0.0 {
return 0.0;
}
let mut freq: HashMap<u8, usize> = HashMap::new();
for &b in s.as_bytes() {
*freq.entry(b).or_insert(0) += 1;
}
freq.values().fold(0.0, |acc, &count| {
let p = count as f64 / len;
acc - p * p.log2()
})
}
/// Check whether a token contains both alphabetic and digit characters.
fn has_mixed_alpha_digit(s: &str) -> bool {
let has_alpha = s.bytes().any(|b| b.is_ascii_alphabetic());
let has_digit = s.bytes().any(|b| b.is_ascii_digit());
has_alpha && has_digit
}
#[cfg(test)]
@@ -381,4 +452,87 @@ MIIEowIBAAKCAQEA0ZPr5JeyVDonXsKhfq...
// Low sensitivity should not flag generic secrets
assert!(matches!(result, LeakResult::Clean));
}
#[test]
fn url_path_segments_not_flagged() {
let detector = LeakDetector::new();
// URL with a long mixed-alphanumeric path segment that would previously
// false-positive as a high-entropy token.
let content =
"See https://example.org/documents/2024-report-a1b2c3d4e5f6g7h8i9j0.pdf for details";
let result = detector.scan(content);
assert!(
matches!(result, LeakResult::Clean),
"URL path segments should not trigger high-entropy detection"
);
}
#[test]
fn url_with_long_path_not_redacted() {
let detector = LeakDetector::new();
let content = "Reference: https://gov.example.com/publications/research/2024-annual-fiscal-policy-review-9a8b7c6d5e4f3g2h1i0j.html";
let result = detector.scan(content);
assert!(
matches!(result, LeakResult::Clean),
"Long URL paths should not be redacted"
);
}
#[test]
fn detects_high_entropy_token_outside_url() {
let detector = LeakDetector::new();
// A standalone high-entropy token (not in a URL) should still be detected.
let content = "Found credential: aB3xK9mW2pQ7vL4nR8sT1yU6hD0jF5cG";
let result = detector.scan(content);
match result {
LeakResult::Detected { patterns, redacted } => {
assert!(patterns.iter().any(|p| p.contains("High-entropy")));
assert!(redacted.contains("[REDACTED_HIGH_ENTROPY_TOKEN]"));
}
LeakResult::Clean => panic!("Should detect high-entropy token"),
}
}
#[test]
fn low_sensitivity_raises_entropy_threshold() {
let detector = LeakDetector::with_sensitivity(0.3);
// At low sensitivity the entropy threshold is higher (3.5 + 0.3*1.25 = 3.875).
// A repetitive mixed token has low entropy and should not be flagged.
let content = "token found: ab12ab12ab12ab12ab12ab12ab12ab12";
let result = detector.scan(content);
assert!(
matches!(result, LeakResult::Clean),
"Low-entropy repetitive tokens should not be flagged"
);
}
#[test]
fn extract_candidate_tokens_splits_correctly() {
let tokens = extract_candidate_tokens("foo.bar:baz qux-quux key=val");
assert!(tokens.contains(&"foo"));
assert!(tokens.contains(&"bar"));
assert!(tokens.contains(&"baz"));
assert!(tokens.contains(&"qux-quux"));
// '=' is a delimiter, not part of tokens
assert!(tokens.contains(&"key"));
assert!(tokens.contains(&"val"));
}
#[test]
fn shannon_entropy_empty_string() {
assert_eq!(shannon_entropy(""), 0.0);
}
#[test]
fn shannon_entropy_single_char() {
// All same characters: entropy = 0
assert_eq!(shannon_entropy("aaaa"), 0.0);
}
#[test]
fn shannon_entropy_two_equal_chars() {
// "ab" repeated: entropy = 1.0 bit
let e = shannon_entropy("abab");
assert!((e - 1.0).abs() < 0.001);
}
}
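The interplay between the entropy threshold (`3.5 + sensitivity * 1.25`) and the token checks above can be sketched with std-only Rust. The `shannon_entropy` helper below mirrors the one in the diff; the token values are taken from the tests above, and the threshold arithmetic is the same formula, so this is a self-contained illustration rather than the detector itself:

```rust
use std::collections::HashMap;

/// Shannon entropy in bits per character (mirrors the detector's helper).
fn shannon_entropy(s: &str) -> f64 {
    if s.is_empty() {
        return 0.0;
    }
    let len = s.len() as f64;
    let mut freq: HashMap<u8, usize> = HashMap::new();
    for &b in s.as_bytes() {
        *freq.entry(b).or_insert(0) += 1;
    }
    freq.values().fold(0.0, |acc, &count| {
        let p = count as f64 / len;
        acc - p * p.log2()
    })
}

fn main() {
    // At sensitivity 0.7 the threshold is 3.5 + 0.7 * 1.25 = 4.375 bits.
    let threshold = 3.5 + 0.7 * 1.25;

    // Repetitive token: 4 distinct bytes, each at frequency 1/4 -> exactly 2 bits.
    let repetitive = "ab12ab12ab12ab12ab12ab12ab12ab12";
    assert!(shannon_entropy(repetitive) < threshold);

    // The random-looking token from the test above: 32 distinct bytes,
    // so its entropy is log2(32) = 5 bits, clearing the threshold.
    let token = "aB3xK9mW2pQ7vL4nR8sT1yU6hD0jF5cG";
    assert!(shannon_entropy(token) >= threshold);

    println!(
        "repetitive: {:.3} bits/char, token: {:.3} bits/char",
        shannon_entropy(repetitive),
        shannon_entropy(token)
    );
}
```

This also shows why `low_sensitivity_raises_entropy_threshold` passes: the repetitive token's entropy (2.0 bits) sits well below even the lowest threshold the formula can produce.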
+1 -1
@@ -184,7 +184,7 @@ impl Tool for CronAddTool {
return Ok(blocked);
}
cron::add_shell_job(&self.config, name, schedule, command)
cron::add_shell_job_with_approval(&self.config, name, schedule, command, approved)
}
JobType::Agent => {
let prompt = match args.get("prompt").and_then(serde_json::Value::as_str) {
+2 -2
@@ -166,10 +166,10 @@ mod tests {
config_path: tmp.path().join("config.toml"),
..Config::default()
};
config.autonomy.level = AutonomyLevel::ReadOnly;
std::fs::create_dir_all(&config.workspace_dir).unwrap();
let job = cron::add_job(&config, "*/5 * * * *", "echo ok").unwrap();
config.autonomy.level = AutonomyLevel::ReadOnly;
let cfg = Arc::new(config);
let job = cron::add_job(&cfg, "*/5 * * * *", "echo ok").unwrap();
let tool = CronRemoveTool::new(cfg.clone(), test_security(&cfg));
let result = tool.execute(json!({"job_id": job.id})).await.unwrap();
+15 -9
@@ -211,10 +211,10 @@ mod tests {
config_path: tmp.path().join("config.toml"),
..Config::default()
};
config.autonomy.level = AutonomyLevel::ReadOnly;
std::fs::create_dir_all(&config.workspace_dir).unwrap();
let job = cron::add_job(&config, "*/5 * * * *", "echo run-now").unwrap();
config.autonomy.level = AutonomyLevel::ReadOnly;
let cfg = Arc::new(config);
let job = cron::add_job(&cfg, "*/5 * * * *", "echo run-now").unwrap();
let tool = CronRunTool::new(cfg.clone(), test_security(&cfg));
let result = tool.execute(json!({ "job_id": job.id })).await.unwrap();
@@ -234,21 +234,27 @@ mod tests {
config.autonomy.allowed_commands = vec!["touch".into()];
std::fs::create_dir_all(&config.workspace_dir).unwrap();
let cfg = Arc::new(config);
let job = cron::add_job(&cfg, "*/5 * * * *", "touch cron-run-approval").unwrap();
// Create with explicit approval so the job persists for the run test.
let job = cron::add_shell_job_with_approval(
&cfg,
None,
cron::Schedule::Cron {
expr: "*/5 * * * *".into(),
tz: None,
},
"touch cron-run-approval",
true,
)
.unwrap();
let tool = CronRunTool::new(cfg.clone(), test_security(&cfg));
// Without approval, the tool-level policy check blocks medium-risk commands.
let denied = tool.execute(json!({ "job_id": job.id })).await.unwrap();
assert!(!denied.success);
assert!(denied
.error
.unwrap_or_default()
.contains("explicit approval"));
let approved = tool
.execute(json!({ "job_id": job.id, "approved": true }))
.await
.unwrap();
assert!(approved.success, "{:?}", approved.error);
}
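The approval gate this test exercises follows a simple pattern: deny risky actions unless the caller passes an explicit `approved` flag. A hypothetical std-only sketch — the risk heuristic below is a stand-in for the real `validate_command_execution` policy (allowlist + risk gate), not its actual logic:

```rust
/// Approval-gated policy check (hypothetical simplification of the
/// tool-level security check; the risk heuristic is illustrative only).
fn check_approval(command: &str, approved: bool) -> Result<(), String> {
    // Stand-in heuristic: treat anything other than a plain echo as medium risk.
    let medium_risk = !command.starts_with("echo ");
    if medium_risk && !approved {
        return Err(format!(
            "command '{command}' requires explicit approval (pass approved: true)"
        ));
    }
    Ok(())
}

fn main() {
    // Without approval, medium-risk commands are blocked...
    assert!(check_approval("touch cron-run-approval", false).is_err());
    // ...and allowed once the caller explicitly approves.
    assert!(check_approval("touch cron-run-approval", true).is_ok());
    // Low-risk commands never need approval.
    assert!(check_approval("echo ok", false).is_ok());
    println!("ok");
}
```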
#[tokio::test]
+3 -13
@@ -119,21 +119,11 @@ impl Tool for CronUpdateTool {
.and_then(serde_json::Value::as_bool)
.unwrap_or(false);
if let Some(command) = &patch.command {
if let Err(reason) = self.security.validate_command_execution(command, approved) {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(reason),
});
}
}
if let Some(blocked) = self.enforce_mutation_allowed("cron_update") {
return Ok(blocked);
}
match cron::update_job(&self.config, job_id, patch) {
match cron::update_shell_job_with_approval(&self.config, job_id, patch, approved) {
Ok(job) => Ok(ToolResult {
success: true,
output: serde_json::to_string_pretty(&job)?,
@@ -228,10 +218,10 @@ mod tests {
config_path: tmp.path().join("config.toml"),
..Config::default()
};
config.autonomy.level = AutonomyLevel::ReadOnly;
std::fs::create_dir_all(&config.workspace_dir).unwrap();
let job = cron::add_job(&config, "*/5 * * * *", "echo ok").unwrap();
config.autonomy.level = AutonomyLevel::ReadOnly;
let cfg = Arc::new(config);
let job = cron::add_job(&cfg, "*/5 * * * *", "echo ok").unwrap();
let tool = CronUpdateTool::new(cfg.clone(), test_security(&cfg));
let result = tool
+3 -1
@@ -289,11 +289,13 @@ pub fn all_tools_with_runtime(
// Web search tool (enabled by default for GLM and other models)
if root_config.web_search.enabled {
tool_arcs.push(Arc::new(WebSearchTool::new(
tool_arcs.push(Arc::new(WebSearchTool::new_with_config(
root_config.web_search.provider.clone(),
root_config.web_search.brave_api_key.clone(),
root_config.web_search.max_results,
root_config.web_search.timeout_secs,
root_config.config_path.clone(),
root_config.secrets.encrypt,
)));
}
+42 -11
@@ -253,14 +253,6 @@ impl ScheduleTool {
.filter(|value| !value.trim().is_empty())
.ok_or_else(|| anyhow::anyhow!("Missing or empty 'command' parameter"))?;
if let Err(reason) = self.security.validate_command_execution(command, approved) {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(reason),
});
}
let expression = args.get("expression").and_then(|value| value.as_str());
let delay = args.get("delay").and_then(|value| value.as_str());
let run_at = args.get("run_at").and_then(|value| value.as_str());
@@ -309,8 +301,28 @@ impl ScheduleTool {
}
}
// All job creation routes through validated cron helpers, which enforce
// the full security policy (allowlist + risk gate) before persistence.
if let Some(value) = expression {
let job = cron::add_job(&self.config, value, command)?;
let job = match cron::add_shell_job_with_approval(
&self.config,
None,
cron::Schedule::Cron {
expr: value.to_string(),
tz: None,
},
command,
approved,
) {
Ok(job) => job,
Err(error) => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(error.to_string()),
});
}
};
return Ok(ToolResult {
success: true,
output: format!(
@@ -325,7 +337,16 @@ impl ScheduleTool {
}
if let Some(value) = delay {
let job = cron::add_once(&self.config, value, command)?;
let job = match cron::add_once_validated(&self.config, value, command, approved) {
Ok(job) => job,
Err(error) => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(error.to_string()),
});
}
};
return Ok(ToolResult {
success: true,
output: format!(
@@ -343,7 +364,17 @@ impl ScheduleTool {
.map_err(|error| anyhow::anyhow!("Invalid run_at timestamp: {error}"))?
.with_timezone(&Utc);
let job = cron::add_once_at(&self.config, run_at_parsed, command)?;
let job = match cron::add_once_at_validated(&self.config, run_at_parsed, command, approved)
{
Ok(job) => job,
Err(error) => {
return Ok(ToolResult {
success: false,
output: String::new(),
error: Some(error.to_string()),
});
}
};
Ok(ToolResult {
success: true,
output: format!(
+178 -7
@@ -2,15 +2,26 @@ use super::traits::{Tool, ToolResult};
use async_trait::async_trait;
use regex::Regex;
use serde_json::json;
use std::path::{Path, PathBuf};
use std::time::Duration;
/// Web search tool for searching the internet.
/// Supports multiple providers: DuckDuckGo (free), Brave (requires API key).
///
/// The Brave API key is resolved lazily at execution time: if the boot-time key
/// is missing or still encrypted, the tool re-reads `config.toml`, decrypts the
/// `[web_search] brave_api_key` field, and uses the result. This ensures that
/// keys set or rotated after boot, and encrypted keys, are correctly picked up.
pub struct WebSearchTool {
provider: String,
brave_api_key: Option<String>,
/// Boot-time key snapshot (may be `None` if not yet configured at startup).
boot_brave_api_key: Option<String>,
max_results: usize,
timeout_secs: u64,
/// Path to `config.toml` for lazy re-read of keys at execution time.
config_path: PathBuf,
/// Whether secret encryption is enabled (needed to create a `SecretStore`).
secrets_encrypt: bool,
}
impl WebSearchTool {
@@ -22,9 +33,85 @@ impl WebSearchTool {
) -> Self {
Self {
provider: provider.trim().to_lowercase(),
brave_api_key,
boot_brave_api_key: brave_api_key,
max_results: max_results.clamp(1, 10),
timeout_secs: timeout_secs.max(1),
config_path: PathBuf::new(),
secrets_encrypt: false,
}
}
/// Create a `WebSearchTool` with config-reload and decryption support.
///
/// `config_path` is the path to `config.toml` so the tool can re-read the
/// Brave API key at execution time. `secrets_encrypt` controls whether the
/// key is decrypted via `SecretStore`.
pub fn new_with_config(
provider: String,
brave_api_key: Option<String>,
max_results: usize,
timeout_secs: u64,
config_path: PathBuf,
secrets_encrypt: bool,
) -> Self {
Self {
provider: provider.trim().to_lowercase(),
boot_brave_api_key: brave_api_key,
max_results: max_results.clamp(1, 10),
timeout_secs: timeout_secs.max(1),
config_path,
secrets_encrypt,
}
}
/// Resolve the Brave API key, preferring the boot-time value but falling
/// back to a fresh config read + decryption when the boot-time value is
/// absent, empty, or still an encrypted blob.
fn resolve_brave_api_key(&self) -> anyhow::Result<String> {
// Fast path: boot-time key is present and usable (not an encrypted blob).
if let Some(ref key) = self.boot_brave_api_key {
if !key.is_empty() && !crate::security::SecretStore::is_encrypted(key) {
return Ok(key.clone());
}
}
// Slow path: re-read config.toml to pick up keys set/rotated after boot.
self.reload_brave_api_key()
}
/// Re-read `config.toml` and decrypt `[web_search] brave_api_key`.
fn reload_brave_api_key(&self) -> anyhow::Result<String> {
let contents = std::fs::read_to_string(&self.config_path).map_err(|e| {
anyhow::anyhow!(
"Failed to read config file {} for Brave API key: {e}",
self.config_path.display()
)
})?;
let config: crate::config::Config = toml::from_str(&contents).map_err(|e| {
anyhow::anyhow!(
"Failed to parse config file {} for Brave API key: {e}",
self.config_path.display()
)
})?;
let raw_key = config
.web_search
.brave_api_key
.filter(|k| !k.is_empty())
.ok_or_else(|| anyhow::anyhow!("Brave API key not configured"))?;
// Decrypt if necessary.
if crate::security::SecretStore::is_encrypted(&raw_key) {
let zeroclaw_dir = self.config_path.parent().unwrap_or_else(|| Path::new("."));
let store = crate::security::SecretStore::new(zeroclaw_dir, self.secrets_encrypt);
let plaintext = store.decrypt(&raw_key)?;
if plaintext.is_empty() {
anyhow::bail!("Brave API key not configured (decrypted value is empty)");
}
Ok(plaintext)
} else {
Ok(raw_key)
}
}
@@ -99,10 +186,7 @@ impl WebSearchTool {
}
async fn search_brave(&self, query: &str) -> anyhow::Result<String> {
let api_key = self
.brave_api_key
.as_ref()
.ok_or_else(|| anyhow::anyhow!("Brave API key not configured"))?;
let api_key = self.resolve_brave_api_key()?;
let encoded_query = urlencoding::encode(query);
let search_url = format!(
@@ -117,7 +201,7 @@ impl WebSearchTool {
let response = client
.get(&search_url)
.header("Accept", "application/json")
.header("X-Subscription-Token", api_key)
.header("X-Subscription-Token", &api_key)
.send()
.await?;
@@ -328,4 +412,91 @@ mod tests {
assert!(result.is_err());
assert!(result.unwrap_err().to_string().contains("API key"));
}
#[test]
fn test_resolve_brave_api_key_uses_boot_key() {
let tool = WebSearchTool::new(
"brave".to_string(),
Some("sk-plaintext-key".to_string()),
5,
15,
);
let key = tool.resolve_brave_api_key().unwrap();
assert_eq!(key, "sk-plaintext-key");
}
#[test]
fn test_resolve_brave_api_key_reloads_from_config() {
let tmp = tempfile::TempDir::new().unwrap();
let config_path = tmp.path().join("config.toml");
std::fs::write(
&config_path,
"[web_search]\nbrave_api_key = \"fresh-key-from-disk\"\n",
)
.unwrap();
// No boot key -- forces reload from config
let tool =
WebSearchTool::new_with_config("brave".to_string(), None, 5, 15, config_path, false);
let key = tool.resolve_brave_api_key().unwrap();
assert_eq!(key, "fresh-key-from-disk");
}
#[test]
fn test_resolve_brave_api_key_decrypts_encrypted_key() {
let tmp = tempfile::TempDir::new().unwrap();
let store = crate::security::SecretStore::new(tmp.path(), true);
let encrypted = store.encrypt("brave-secret-key").unwrap();
let config_path = tmp.path().join("config.toml");
std::fs::write(
&config_path,
format!("[web_search]\nbrave_api_key = \"{}\"\n", encrypted),
)
.unwrap();
// Boot key is the encrypted blob -- should trigger reload + decrypt
let tool = WebSearchTool::new_with_config(
"brave".to_string(),
Some(encrypted),
5,
15,
config_path,
true,
);
let key = tool.resolve_brave_api_key().unwrap();
assert_eq!(key, "brave-secret-key");
}
#[test]
fn test_resolve_brave_api_key_picks_up_runtime_update() {
let tmp = tempfile::TempDir::new().unwrap();
let config_path = tmp.path().join("config.toml");
// Start with no key in config
std::fs::write(&config_path, "[web_search]\n").unwrap();
let tool = WebSearchTool::new_with_config(
"brave".to_string(),
None,
5,
15,
config_path.clone(),
false,
);
// Key not configured yet -- should fail
assert!(tool.resolve_brave_api_key().is_err());
// Simulate runtime config update (e.g. via web_search_config set)
std::fs::write(
&config_path,
"[web_search]\nbrave_api_key = \"runtime-updated-key\"\n",
)
.unwrap();
// Now should succeed with the updated key
let key = tool.resolve_brave_api_key().unwrap();
assert_eq!(key, "runtime-updated-key");
}
}
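The lazy-resolution pattern in `resolve_brave_api_key` (fast path on a boot-time snapshot, slow path re-reading configuration from disk) generalizes beyond this tool. A std-only sketch, with a plain text file standing in for `config.toml` and the encryption layer omitted (`SecretStore` is out of scope here):

```rust
use std::fs;
use std::io;
use std::path::PathBuf;

/// A credential snapshotted at boot but re-read from disk whenever the
/// snapshot is unusable (mirrors the shape of resolve_brave_api_key).
struct LazyKey {
    boot_key: Option<String>,
    key_path: PathBuf,
}

impl LazyKey {
    fn resolve(&self) -> io::Result<String> {
        // Fast path: a usable boot-time value.
        if let Some(ref key) = self.boot_key {
            if !key.is_empty() {
                return Ok(key.clone());
            }
        }
        // Slow path: pick up keys set or rotated after boot.
        let key = fs::read_to_string(&self.key_path)?.trim().to_string();
        if key.is_empty() {
            return Err(io::Error::new(io::ErrorKind::NotFound, "key not configured"));
        }
        Ok(key)
    }
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("lazy_key_demo.txt");
    let _ = fs::remove_file(&path);

    // No boot key and no file yet: resolution fails.
    let lazy = LazyKey { boot_key: None, key_path: path.clone() };
    assert!(lazy.resolve().is_err());

    // Simulate a runtime config update; the same instance now succeeds
    // without restarting, as in test_resolve_brave_api_key_picks_up_runtime_update.
    fs::write(&path, "runtime-updated-key\n")?;
    assert_eq!(lazy.resolve()?, "runtime-updated-key");

    fs::remove_file(&path)?;
    println!("ok");
    Ok(())
}
```

Note the design choice shared with the tool: the slow path is evaluated on every call rather than cached, so key rotation on disk takes effect immediately at the cost of a file read per miss.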