
AI Podcast - Project Instructions

Git Remote (Gitea)

  • Repo: git@gitea-nas:luke/ai-podcast.git
  • Web: http://mmgnas:3000/luke/ai-podcast
  • SSH Host: gitea-nas (configured in ~/.ssh/config)
    • HostName: mmgnas (use mmgnas-10g if wired connection issues)
    • Port: 2222
    • User: git
    • IdentityFile: ~/.ssh/gitea_mmgnas
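The settings above correspond to a ~/.ssh/config entry along these lines:

```
Host gitea-nas
    HostName mmgnas
    Port 2222
    User git
    IdentityFile ~/.ssh/gitea_mmgnas
```

Swap HostName to mmgnas-10g if there are wired-connection issues.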

NAS Access

  • Hostname: mmgnas (wireless) or mmgnas-10g (wired/10G)
  • SSH Port: 8001
  • User: luke
  • Docker path: /share/CACHEDEV1_DATA/.qpkg/container-station/bin/docker

Castopod (Podcast Publishing)

  • URL: https://podcast.macneilmediagroup.com
  • Podcast handle: @LukeAtTheRoost
  • API Auth: Basic auth (admin/podcast2026api)
  • Container: castopod-castopod-1
  • Database: castopod-mariadb-1 (user: castopod, db: castopod)
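A stdlib-only sketch of hitting the Castopod API with the Basic auth above. The endpoint path is hypothetical (illustration only; check Castopod's actual API routes), and the helper name is invented:

```python
import base64
import urllib.request

BASE = "https://podcast.macneilmediagroup.com"

def authed_request(path, user="admin", password="podcast2026api"):
    """Build a urllib request carrying the Basic auth header; the caller
    decides whether to actually send it with urllib.request.urlopen()."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(BASE + path)
    req.add_header("Authorization", f"Basic {token}")
    return req

# Hypothetical endpoint, for illustration only:
req = authed_request("/api/episodes")
```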

Running the App

# Start backend
cd /Users/lukemacneil/ai-podcast
python -m uvicorn backend.main:app --reload --host 0.0.0.0 --port 8000

# Or use run.sh
./run.sh

Publishing Episodes

python publish_episode.py ~/Desktop/episode.mp3

Environment Variables

Required in .env:

  • OPENROUTER_API_KEY
  • ELEVENLABS_API_KEY (optional)
  • INWORLD_API_KEY (for Inworld TTS)

Post-Production Pipeline (added Feb 2026)

  • Branch: feature/real-callers (all current work lives here, pushed to Gitea)
  • Stem Recorder (backend/services/stem_recorder.py): records five WAV stems (host, caller, music, sfx, ads) during live shows. Uses a lock-free deque architecture: audio callbacks just append to deques, and a background writer thread drains them to disk. write() handles continuous streams (host mic, music, ads); write_sporadic() handles burst sources (caller TTS, SFX) with time-aligned silence padding.
  • Audio hooks in backend/services/audio.py: 7 tap points, each guarded by an if self.stem_recorder: check. A persistent mic stream (start_stem_mic/stop_stem_mic) runs during recording so the host voice is captured continuously, not just during push-to-talk.
  • API endpoints: POST /api/recording/start, POST /api/recording/stop (auto-runs postprod in background thread), POST /api/recording/process
  • Frontend: REC button in header with red pulse animation when recording
  • Post-prod script (postprod.py): 6-step pipeline — load stems → gap removal → voice compression (ffmpeg acompressor) → music ducking → stereo mix → EBU R128 loudness normalization to -16 LUFS. All steps skippable via CLI flags.
  • Known issues resolved:
    • Lock-free recorder (the old version used threading.Lock inside audio callbacks, causing crashes)
    • scipy.signal.resample replaced with nearest-neighbor resampling (it produced artifacts on small chunks)
    • sys import bug in the auto-postprod path
    • Host mic not captured without the persistent stream
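The lock-free deque pattern described above can be sketched roughly as follows. This is a simplified single-stem model, not the real stem_recorder.py; the class and parameter names are invented:

```python
import threading
import time
import wave
from collections import deque

class StemWriter:
    """Single-producer/single-consumer stem writer: the audio callback only
    appends raw PCM chunks to a deque (O(1), no locks); a background thread
    drains the deque to a WAV file."""

    def __init__(self, path, samplerate=48000, channels=1, sampwidth=2):
        self.samplerate = samplerate
        self.channels = channels
        self.sampwidth = sampwidth
        self.frames_queued = 0          # producer-side frame counter
        self.queue = deque()            # append/popleft are thread-safe
        self.running = True
        self.wav = wave.open(path, "wb")
        self.wav.setnchannels(channels)
        self.wav.setsampwidth(sampwidth)
        self.wav.setframerate(samplerate)
        self.thread = threading.Thread(target=self._drain, daemon=True)
        self.thread.start()

    def write(self, pcm):
        """Continuous streams (mic, music, ads): append and return at once."""
        self.frames_queued += len(pcm) // (self.channels * self.sampwidth)
        self.queue.append(pcm)

    def write_sporadic(self, pcm, at_seconds):
        """Burst sources (caller TTS, SFX): pad the gap with silence so the
        audio lands at `at_seconds` from the start, keeping stems aligned."""
        bpf = self.channels * self.sampwidth
        gap = int(at_seconds * self.samplerate) - self.frames_queued
        if gap > 0:
            self.write(b"\x00" * (gap * bpf))
        self.write(pcm)

    def _drain(self):
        # Background writer thread: no lock is ever held in the callback path.
        while self.running or self.queue:
            try:
                self.wav.writeframes(self.queue.popleft())
            except IndexError:
                time.sleep(0.01)

    def close(self):
        self.running = False
        self.thread.join()
        self.wav.close()
```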
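The final loudness step of the pipeline might look like the sketch below, assuming ffmpeg is on PATH. The TP/LRA values are illustrative defaults, not necessarily what postprod.py uses, and the function name is invented:

```python
import subprocess

def normalize_loudness(src, dst, target_lufs=-16.0, run=subprocess.run):
    """Single-pass EBU R128 normalization via ffmpeg's loudnorm filter.
    `run` is injectable so the command can be inspected without ffmpeg."""
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-af", f"loudnorm=I={target_lufs}:TP=-1.5:LRA=11",
        dst,
    ]
    return run(cmd, check=True)
```

A two-pass loudnorm (measure, then apply) is more accurate for podcast masters, at the cost of a second ffmpeg invocation.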

LLM Settings

  • _pick_response_budget() in main.py controls caller dialog token limits (150-450 tokens). MiniMax respects limits strictly — if responses seem short, check these values.
  • Default max_tokens in llm.py is 300 (for non-caller uses)
  • Grok (x-ai/grok-4-fast) works well for natural dialog; MiniMax tends toward terse responses
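The real selection logic lives in _pick_response_budget() in main.py; as a purely illustrative stand-in for a 150-450 token clamp (the growth rule here is invented, not the actual heuristic):

```python
def pick_response_budget(turn_index, base=150, ceiling=450, step=75):
    """Illustrative only: grow the caller-dialog token budget as the call
    progresses, clamped to the 150-450 range noted above."""
    return max(base, min(ceiling, base + step * turn_index))
```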

Website

  • Domain: lukeattheroost.com (behind Cloudflare)
  • Analytics: Cloudflare Web Analytics (enable in Cloudflare dashboard, no code changes needed)

Episodes Published

  • Episode 6 published 2026-02-08 (podcast6.mp3, ~31 min)