zeroclaw/src/providers
Christian Pojoni d91e54a5d0
fix(tool+channel): revert invalid model set via model_routing_config (#3497)
When the LLM hallucinates an invalid model ID through the
model_routing_config tool's set_default action, the invalid model gets
persisted to config.toml. The channel hot-reload then picks it up and
every subsequent message fails with a non-retryable 404, permanently
killing the connection with no user recovery path.

Fix with two layers of defense:

1. Tool probe-and-rollback: after saving the new model, send a minimal
   chat request to verify the model is accessible. If the API returns a
   non-retryable error (404, auth failure, etc.), automatically restore
   the previous config and return a failure notice to the LLM.

2. Channel safety net: in maybe_apply_runtime_config_update, reject
   config reloads when warmup fails with a non-retryable error instead
   of applying the broken config anyway.

Co-authored-by: Christian Pojoni <christian.pojoni@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 15:17:20 +03:00
anthropic.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00
azure_openai.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00
bedrock.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00
compatible.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00
copilot.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00
gemini.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00
glm.rs feat(proxy): add scoped proxy configuration and docs runbooks 2026-02-18 22:10:42 +08:00
mod.rs feat(providers): close AiHubMix, SiliconFlow, and Codex OAuth provider gaps (#3730) 2026-03-24 15:17:16 +03:00
ollama.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00
openai_codex.rs fix(memory): filter autosave noise and scope recall/store by session (#3695) 2026-03-24 15:17:18 +03:00
openai.rs fix(providers): adjust temperature for OpenAI reasoning models (#2936) 2026-03-24 15:17:19 +03:00
openrouter.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00
reliable.rs fix(tool+channel): revert invalid model set via model_routing_config (#3497) 2026-03-24 15:17:20 +03:00
router.rs feat: add multimodal image marker support with Ollama vision 2026-02-19 21:25:21 +08:00
telnyx.rs test(telnyx): silence unused provider binding in constructor test 2026-02-21 17:38:27 +08:00
traits.rs feat(cache): wire two-tier response cache, multi-provider token tracking, and cache analytics 2026-03-24 15:17:14 +03:00