- Replace aggressive sentence-count limiting with `ensure_complete_thought()`, which only trims if the LLM was actually cut off mid-sentence
- Soften prompt guidance toward natural brevity instead of a rigid sentence count
- Set `max_tokens` to 100 as a natural length cap

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
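The commit does not include the helper's body here, but the described behavior (trim only when the reply was cut off mid-sentence, otherwise leave it untouched) could be sketched roughly like this; the signature and regex are assumptions, not the actual implementation:

```python
import re

def ensure_complete_thought(text: str) -> str:
    """Hypothetical sketch: drop a trailing fragment only if the
    LLM output was cut off mid-sentence; otherwise return as-is."""
    text = text.strip()
    if not text:
        return text
    # Reply already ends a sentence: no trimming needed.
    if text[-1] in ".!?…":
        return text
    # Cut after the last sentence terminator (allowing closing quotes/brackets).
    last = None
    for m in re.finditer(r"[.!?…][\"')\]]*", text):
        last = m
    if last:
        return text[: last.end()].rstrip()
    # No complete sentence at all: keep the text rather than return nothing.
    return text
```

Under this sketch, a truncated reply like `"It works. But the mod"` would be trimmed back to `"It works."`, while complete replies pass through unchanged.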