Enforce shorter AI responses and prevent cut-off sentences

- Reduce max_tokens from 100 to 75 for shorter output
- Add truncate_to_complete_sentence() to trim at last punctuation
- Applied to both chat and auto-respond paths

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-05 17:07:41 -07:00
parent 0e65fa5084
commit 6a56967540
2 changed files with 22 additions and 1 deletion


@@ -124,7 +124,7 @@ class LLMService:
                 json={
                     "model": self.openrouter_model,
                     "messages": messages,
-                    "max_tokens": 100,
+                    "max_tokens": 75,
                 },
             )
             response.raise_for_status()
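
The `truncate_to_complete_sentence()` helper named in the commit message lives in the other changed file and is not shown in this hunk. A minimal sketch of what such a helper might look like, assuming it trims at the last `.`, `!`, or `?` and leaves text without any terminator unchanged:

```python
def truncate_to_complete_sentence(text: str) -> str:
    """Trim model output at the last sentence-ending punctuation mark.

    If the response was cut off mid-sentence, drop the dangling
    fragment; if no terminator is found, return the text as-is.
    """
    # Position of the last sentence terminator, or -1 if none exists.
    cut = max(text.rfind(ch) for ch in ".!?")
    if cut == -1:
        return text.strip()
    return text[: cut + 1].strip()
```

With `max_tokens` lowered to 75, responses are more likely to hit the token limit mid-sentence, so applying this trim on both the chat and auto-respond paths keeps the visible output ending cleanly.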