zeroclaw/docs/reference/api
simianastronaut d0c33b3db4 feat(config): add configurable pacing controls for slow/local LLM workloads (#2963)
Add a new `[pacing]` config section with four opt-in parameters that
let users tune timeout and loop-detection behavior for local LLMs
(Ollama, llama.cpp, vLLM) without disabling safety features entirely:

- `step_timeout_secs`: per-step LLM inference timeout independent of
  the overall message budget, catching hung model responses early.
- `loop_detection_min_elapsed_secs`: time-gated loop detection that
  only activates after a configurable grace period, avoiding false
  positives on long-running browser/research workflows.
- `loop_ignore_tools`: per-tool loop-detection exclusions so tools
  like `browser_screenshot` that structurally resemble loops are not
  counted toward identical-output detection.
- `message_timeout_scale_max`: overrides the hardcoded 4x ceiling in
  the channel message timeout scaling formula.

All parameters are strictly optional with no effect when absent,
preserving full backwards compatibility.
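Based on the parameter names above, a user's config section might look like the following sketch; the values shown are illustrative placeholders chosen for this example, not project defaults.

```toml
# Hypothetical example of the new [pacing] section; all keys are
# opt-in, and the values here are illustrative, not defaults.
[pacing]
# Abort a single LLM inference step after 10 minutes, even if the
# overall message budget would allow longer.
step_timeout_secs = 600

# Skip loop detection entirely during the first 2 minutes of a run,
# so long browser/research workflows don't trip false positives.
loop_detection_min_elapsed_secs = 120

# Tools whose repeated identical output should not count toward
# identical-output loop detection.
loop_ignore_tools = ["browser_screenshot"]

# Raise the channel message timeout scaling ceiling above the
# previously hardcoded 4x.
message_timeout_scale_max = 8.0
```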

Closes #2963

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-21 08:23:36 -04:00
channels-reference.md fix(docs): remove stale onboarding flags after CLI changes (#3516) 2026-03-14 17:54:14 -04:00
config-reference.md feat(config): add configurable pacing controls for slow/local LLM workloads (#2963) 2026-03-21 08:23:36 -04:00
config-reference.vi.md docs: update all internal links to match topic-based directory layout 2026-03-09 23:09:09 -04:00
providers-reference.md feat(providers): add Avian as OpenAI-compatible provider (#4076) 2026-03-20 15:31:59 -04:00