* **fix: change `compact_context` default to `true`**

  Local LLMs with limited context windows immediately run out of context when `compact_context` defaults to `false`. The system prompt alone can consume 25K+ tokens, exceeding even 55K context windows once history is included. Setting `compact_context = true` by default limits system-prompt injection to 6000 characters and RAG results to 2 chunks, making the agent usable with smaller models out of the box.

  Fixes #3987

* **docs: update `compact_context` default to `true` in config reference**

  Update all locale variants (en, zh-CN, vi) to reflect the new default.

* **test: update tests to expect `compact_context` default of `true`**

  Update assertions in the `schema.rs` unit tests and the `config_persistence.rs` component tests to match the new default value.
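A minimal sketch of what the changed default might look like in the config schema. Only `compact_context` and the quoted limits (6000 characters, 2 RAG chunks) come from this PR; the struct name, the limit field names, and the `Default` layout are hypothetical placeholders, not the project's actual API:

```rust
// Hypothetical sketch of the config default change described in the PR.
// Field names other than `compact_context` are illustrative assumptions.
#[derive(Debug)]
struct ContextConfig {
    /// When true, trim injected context so small-window local models stay usable.
    compact_context: bool,
    /// Max system-prompt characters injected when compact_context is true (from the PR: 6000).
    max_system_prompt_chars: usize,
    /// Max RAG chunks included when compact_context is true (from the PR: 2).
    max_rag_chunks: usize,
}

impl Default for ContextConfig {
    fn default() -> Self {
        Self {
            compact_context: true, // previously false; changed by this PR (Fixes #3987)
            max_system_prompt_chars: 6000,
            max_rag_chunks: 2,
        }
    }
}

fn main() {
    // Mirrors the updated test assertions: the default is now compact.
    let cfg = ContextConfig::default();
    assert!(cfg.compact_context);
    println!("{cfg:?}");
}
```

A test along the lines of `assert!(ContextConfig::default().compact_context)` is the kind of assertion the PR updates in `schema.rs` and `config_persistence.rs`.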