Fix AI responses being cut off

- Increase max_tokens from 100 to 150 to avoid mid-sentence truncation
- Tighten prompt to 1-2 short sentences with emphasis on completing them

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-05 17:04:12 -07:00
parent d583b48af0
commit 3192735615
2 changed files with 3 additions and 3 deletions


@@ -124,7 +124,7 @@ class LLMService:
             json={
                 "model": self.openrouter_model,
                 "messages": messages,
-                "max_tokens": 100,
+                "max_tokens": 150,
             },
         )
         response.raise_for_status()
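
For context: raising `max_tokens` makes truncation less likely but cannot rule it out. OpenAI-compatible APIs, including OpenRouter, report whether a reply was cut off via the `finish_reason` field on each choice, so truncation can be detected explicitly. A minimal sketch (the response shape is the standard chat-completions format; the helper name is ours, not from this codebase):

```python
def is_truncated(response_json: dict) -> bool:
    # OpenAI-compatible chat-completions responses set finish_reason to
    # "length" when generation stopped because max_tokens was reached,
    # and "stop" when the model finished on its own.
    choice = response_json["choices"][0]
    return choice.get("finish_reason") == "length"


# Mocked response bodies illustrating both outcomes:
cut_off = {"choices": [{"message": {"content": "Hello, I"}, "finish_reason": "length"}]}
complete = {"choices": [{"message": {"content": "Hello."}, "finish_reason": "stop"}]}
```

A check like this could log or retry when `is_truncated(...)` is true, instead of relying solely on a larger token budget.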