For models with small context windows (e.g. glm-4.5-air, ~8K tokens), the system prompt alone can exceed the limit. This PR adds:

- A `max_system_prompt_chars` config option (default `0` = unlimited).
- `compact_context` now also compacts the system prompt: it skips the Channel Capabilities section and lists only tool names.
- Truncation with a marker when the prompt still exceeds the budget.

Users can set `max_system_prompt_chars = 8000` in the `[agent]` config section to cap the system prompt for small-context models.

Closes #4124
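A minimal sketch of the truncation-with-marker step described above. The function name, marker text, and char-based budget are illustrative assumptions, not the actual implementation in this PR; only the `max_system_prompt_chars` semantics (`0` = unlimited, otherwise a hard cap) come from the description:

```rust
// Hypothetical marker text; the PR does not specify the exact wording.
const TRUNCATION_MARKER: &str = "\n[... system prompt truncated ...]";

/// Cap a system prompt at `max_chars` characters (0 = unlimited),
/// appending a marker when truncation occurs.
fn cap_system_prompt(prompt: &str, max_chars: usize) -> String {
    if max_chars == 0 || prompt.chars().count() <= max_chars {
        return prompt.to_string();
    }
    // Reserve room for the marker so the result stays within budget.
    let keep = max_chars.saturating_sub(TRUNCATION_MARKER.chars().count());
    let truncated: String = prompt.chars().take(keep).collect();
    format!("{truncated}{TRUNCATION_MARKER}")
}

fn main() {
    let long = "x".repeat(10_000);
    let capped = cap_system_prompt(&long, 8_000);
    assert!(capped.chars().count() <= 8_000);
    assert!(capped.ends_with("[... system prompt truncated ...]"));
    // 0 means unlimited: the prompt passes through untouched.
    assert_eq!(cap_system_prompt(&long, 0).len(), 10_000);
}
```

Counting `char`s rather than bytes avoids panicking on a non-UTF-8-boundary slice; a byte-based cap would need `is_char_boundary` checks instead.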