Fix AI responses being cut off

- Increase max_tokens from 100 to 150 to avoid mid-sentence truncation
- Tighten prompt to 1-2 short sentences with emphasis on completing them

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-05 17:04:12 -07:00
parent d583b48af0
commit 3192735615
2 changed files with 3 additions and 3 deletions


@@ -264,8 +264,8 @@ Continue naturally. Don't repeat yourself.
 {history}{context}
 HOW TO TALK:
 - Sound like a real person chatting, not writing.
-- Keep responses to 2-3 sentences. Enough to make your point, short enough for back-and-forth.
-- ALWAYS finish your thought completely. Never stop mid-sentence.
+- Keep responses to 1-2 SHORT sentences. Be brief. This is a fast-paced call, not a monologue.
+- ALWAYS finish your sentence. Never leave a thought incomplete or trailing off.
 - Swear naturally if it fits: fuck, shit, damn, etc.
 SPELLING FOR TEXT-TO-SPEECH (use proper spelling so TTS pronounces correctly):


@@ -124,7 +124,7 @@ class LLMService:
             json={
                 "model": self.openrouter_model,
                 "messages": messages,
-                "max_tokens": 100,
+                "max_tokens": 150,
             },
         )
         response.raise_for_status()
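As a sketch of why the bump helps: in the OpenAI-compatible response shape that OpenRouter returns, a reply cut off by the token limit reports `finish_reason == "length"` rather than `"stop"`. A hypothetical helper (not part of this codebase) could detect that, so truncation can be logged or retried instead of only raising `max_tokens` and hoping:

```python
def was_truncated(completion: dict) -> bool:
    """True if the model stopped because it hit the max_tokens limit.

    Assumes the OpenAI-compatible chat-completion response format that
    OpenRouter returns, where a cut-off reply has finish_reason == "length".
    """
    return completion["choices"][0]["finish_reason"] == "length"


# Example response shapes (hypothetical payloads, trimmed to the relevant keys):
cut_off = {
    "choices": [
        {"message": {"content": "I think the main"}, "finish_reason": "length"}
    ]
}
complete = {
    "choices": [
        {"message": {"content": "Sounds good."}, "finish_reason": "stop"}
    ]
}
```

A check like this after `response.raise_for_status()` would surface any remaining truncation at 150 tokens, rather than letting clipped replies pass silently.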