Fix AI responses being cut off

- Increase max_tokens from 100 to 150 to avoid mid-sentence truncation
- Tighten prompt to 1-2 short sentences with emphasis on completing them
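For context, the fix amounts to bumping the token budget in the request payload. A minimal sketch of the change (the model name and helper function here are hypothetical; only the `model`/`messages`/`max_tokens` payload shape comes from the diff):

```python
def build_chat_request(model: str, messages: list[dict]) -> dict:
    """Build the OpenRouter chat-completion payload."""
    return {
        "model": model,
        "messages": messages,
        # Raised from 100 to 150: 100 tokens (~75 words) was too tight for
        # even 2-3 sentences, so replies were hard-truncated mid-sentence.
        "max_tokens": 150,
    }

payload = build_chat_request(
    "openai/gpt-4o-mini",  # hypothetical model id for illustration
    [{"role": "user", "content": "hey, what's up?"}],
)
```

Note that `max_tokens` is a hard cap, not a target: the prompt change (asking for 1-2 short sentences) is what keeps replies short, while the larger cap gives the model headroom to finish its last sentence.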

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Date: 2026-02-05 17:04:12 -07:00
Parent: d583b48af0
Commit: 3192735615
2 changed files with 3 additions and 3 deletions


@@ -264,8 +264,8 @@ Continue naturally. Don't repeat yourself.
 {history}{context}
 HOW TO TALK:
 - Sound like a real person chatting, not writing.
-- Keep responses to 2-3 sentences. Enough to make your point, short enough for back-and-forth.
-- ALWAYS finish your thought completely. Never stop mid-sentence.
+- Keep responses to 1-2 SHORT sentences. Be brief. This is a fast-paced call, not a monologue.
+- ALWAYS finish your sentence. Never leave a thought incomplete or trailing off.
 - Swear naturally if it fits: fuck, shit, damn, etc.
 SPELLING FOR TEXT-TO-SPEECH (use proper spelling so TTS pronounces correctly):


@@ -124,7 +124,7 @@ class LLMService:
             json={
                 "model": self.openrouter_model,
                 "messages": messages,
-                "max_tokens": 100,
+                "max_tokens": 150,
             },
         )
         response.raise_for_status()