Commit aa3899b1fc4f967565c53ef03f3af27a28bfd480
- Primary model gets 15s, then auto-falls back through gemini-flash, gpt-4o-mini, llama-3.1-8b (10s each)
- Always returns a response: a canned in-character line as last resort
- Reuse the httpx client instead of creating a new one per request
- Remove asyncio.timeout wrappers that were killing requests before the LLM service could try fallbacks

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
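The fallback chain described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the model names and per-model timeouts come from the commit message, while `generate_with_fallback`, `call_model`, and the canned line are hypothetical names. The model-calling coroutine is injected (in the real service it would presumably wrap the shared `httpx.AsyncClient`), so the chain logic itself is network-free.

```python
import asyncio

# Assumed fallback order and timeouts, taken from the commit message.
MODELS = [
    ("primary", 15.0),
    ("gemini-flash", 10.0),
    ("gpt-4o-mini", 10.0),
    ("llama-3.1-8b", 10.0),
]

# Hypothetical last-resort line; the real project would keep this in character.
CANNED_LINE = "Folks, the signal's a little fuzzy tonight -- stay tuned."


async def generate_with_fallback(prompt: str, call_model) -> str:
    """Try each model in order with its own timeout; always return something.

    `call_model(model, prompt)` is an injected coroutine, e.g. a thin wrapper
    around a module-level httpx.AsyncClient that is reused across requests
    rather than recreated each time.
    """
    for model, timeout in MODELS:
        try:
            # asyncio.wait_for bounds this single attempt only, so a slow
            # primary model cannot starve the fallbacks of time.
            return await asyncio.wait_for(call_model(model, prompt), timeout)
        except Exception:
            continue  # timeout or provider error: fall through to next model
    return CANNED_LINE  # never return empty-handed
```

Scoping the timeout to each attempt (rather than wrapping the whole call site in `asyncio.timeout`) is what lets the later models in the chain actually get a turn.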
Description
AI Radio Show - web-based podcast production with multiple TTS providers
Languages
- Python 76.7%
- HTML 12.5%
- JavaScript 5.2%
- CSS 3.4%
- Lua 1.9%
- Other 0.3%