Last updated: 2026-01-30
Text-to-speech Shiny app for the Cornball AI ecosystem.
## Features

- Multi-backend TTS: Chatterbox, Qwen3-TTS, OpenAI, ElevenLabs, fal.ai
- Voice selection: 9 built-in Qwen3 voices, OpenAI voices, ElevenLabs library
- Voice cloning: Upload reference audio for Chatterbox/Qwen3 backends
- History: Persistent storage with audio playback in `~/.cornfab/`
## Installation

```r
# Install from GitHub
remotes::install_github("cornball-ai/cornfab")
```
## Usage

```r
library(cornfab)
run_app()  # Runs on port 7803
```
## Backends

| Backend | Type | Port | Features |
|---|---|---|---|
| Chatterbox | Container | 7810 | Voice cloning, exaggeration control |
| Qwen3-TTS | Container | 7811 | 9 voices, voice design, 10 languages |
| OpenAI | API | - | 6 voices, tts-1/tts-1-hd models |
| ElevenLabs | API | - | Large voice library, multilingual |
| fal.ai | API | - | F5-TTS, Dia, Orpheus models |
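
Once the container backends from the Container Setup section below are running, a quick way to confirm they are reachable is to probe their ports. This is only a connectivity sketch (the request path is arbitrary): any HTTP status other than `000` means the server answered.

```bash
# Probe the local TTS containers on their documented ports;
# curl prints 000 if it cannot connect at all.
curl -s -o /dev/null -w "Chatterbox (7810): %{http_code}\n" http://localhost:7810/
curl -s -o /dev/null -w "Qwen3-TTS  (7811): %{http_code}\n" http://localhost:7811/
```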
## Container Setup

Chatterbox and Qwen3-TTS run as local Docker containers. You must:

- Download the models before running the containers
- Start the containers manually; cornfab does not auto-start them
### Model Storage

Models are stored in the HuggingFace cache:

```
~/.cache/huggingface/hub/
```

Mount this directory when running containers:

```
-v ~/.cache/huggingface:/root/.cache/huggingface
```
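
Once the downloads in the next section have finished, the cache contains one directory per model (HuggingFace's `models--<org>--<name>` naming), so a quick listing shows whether everything the containers need is in place:

```bash
# Verify the TTS models are present in the HuggingFace cache
ls ~/.cache/huggingface/hub | grep -E 'chatterbox|Qwen3-TTS'
# Expected once downloaded:
#   models--ResembleAI--chatterbox
#   models--Qwen--Qwen3-TTS-12Hz-1.7B-Base
#   models--Qwen--Qwen3-TTS-12Hz-1.7B-CustomVoice
#   models--Qwen--Qwen3-TTS-12Hz-1.7B-VoiceDesign
```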
### Downloading Models

#### Chatterbox

Model: `ResembleAI/chatterbox` (~2GB)

Option 1: R with hfhub (recommended)

```r
# install.packages("hfhub")
hfhub::hub_snapshot("ResembleAI/chatterbox")
```
Option 2: Python with huggingface_hub

```bash
pip install huggingface_hub
python -c "from huggingface_hub import snapshot_download; snapshot_download('ResembleAI/chatterbox')"
```
Option 3: curl

```bash
mkdir -p ~/.cache/huggingface/hub/models--ResembleAI--chatterbox/snapshots/main
cd ~/.cache/huggingface/hub/models--ResembleAI--chatterbox/snapshots/main
curl -LO https://huggingface.co/ResembleAI/chatterbox/resolve/main/chatterbox.safetensors
curl -LO https://huggingface.co/ResembleAI/chatterbox/resolve/main/s3gen.safetensors
curl -LO https://huggingface.co/ResembleAI/chatterbox/resolve/main/t3_cfg.safetensors
curl -LO https://huggingface.co/ResembleAI/chatterbox/resolve/main/ve.safetensors
```
#### Qwen3-TTS

Three models are needed for full functionality:

- `Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice` - Built-in speakers (~7GB)
- `Qwen/Qwen3-TTS-12Hz-1.7B-Base` - Voice cloning (~7GB)
- `Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign` - Voice design (~7GB)
Option 1: R with hfhub (recommended)

```r
# Download all three models
hfhub::hub_snapshot("Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice")
hfhub::hub_snapshot("Qwen/Qwen3-TTS-12Hz-1.7B-Base")
hfhub::hub_snapshot("Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign")
```
Option 2: Python with huggingface_hub

```bash
python -c "from huggingface_hub import snapshot_download; \
  snapshot_download('Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice'); \
  snapshot_download('Qwen/Qwen3-TTS-12Hz-1.7B-Base'); \
  snapshot_download('Qwen/Qwen3-TTS-12Hz-1.7B-VoiceDesign')"
```
Option 3: Git LFS

```bash
cd ~/.cache/huggingface/hub
git lfs install
git clone https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice
```
### Running Containers

Note: Containers default to `LOCAL_FILES_ONLY=true` and will fail if models aren't pre-downloaded. Set `LOCAL_FILES_ONLY=false` to enable auto-download (not recommended for production).
#### Chatterbox (port 7810)

```bash
# Build (if not using ghcr.io)
cd ~/chatterbox-tts-api
docker build -f docker/Dockerfile -t chatterbox-tts-api .

# Run
docker run -d --gpus all --network=host --name chatterbox \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e PORT=7810 \
  chatterbox-tts-api
```
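
If you do want the container to fetch models itself on first start (overriding the `LOCAL_FILES_ONLY=true` default noted above), the same run command works with one extra flag. A sketch, only sensible for a quick test:

```bash
# Example only: let the Chatterbox container auto-download models
# (not recommended for production)
docker run -d --gpus all --network=host --name chatterbox \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e PORT=7810 \
  -e LOCAL_FILES_ONLY=false \
  chatterbox-tts-api
```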
#### Qwen3-TTS (port 7811)

```bash
# Build (if not using ghcr.io)
cd ~/qwen3-tts-api
docker build -f Dockerfile.blackwell -t qwen3-tts-api:blackwell .

# Run (Blackwell GPUs - RTX 50xx)
docker run -d --gpus all --network=host --name qwen3-tts-api \
  -v ~/.cache/huggingface:/cache \
  -e PORT=7811 \
  -e USE_FLASH_ATTENTION=false \
  qwen3-tts-api:blackwell

# Run (older GPUs - Ampere, Ada Lovelace)
docker build -t qwen3-tts-api .  # Use default Dockerfile
docker run -d --gpus all --network=host --name qwen3-tts-api \
  -v ~/.cache/huggingface:/cache \
  -e PORT=7811 \
  qwen3-tts-api
```
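
Model loading can take a while on first start. Standard Docker commands (nothing cornfab-specific) are enough to confirm each backend came up:

```bash
# Check that both containers are running, then inspect their startup logs
docker ps --format '{{.Names}}\t{{.Status}}'
docker logs --tail 50 chatterbox
docker logs --tail 50 qwen3-tts-api
```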
## API Backends

For OpenAI, ElevenLabs, and fal.ai, set environment variables:

```bash
export OPENAI_API_KEY="sk-..."
export ELEVENLABS_API_KEY="..."
export FAL_KEY="..."
```
Or configure in the app’s API Settings panel.
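
Alternatively, the same variables can go in `~/.Renviron`, which R reads at startup (standard R behavior, not specific to cornfab); the values below are placeholders:

```bash
# Append placeholder entries to ~/.Renviron; replace with real keys
cat >> ~/.Renviron <<'EOF'
OPENAI_API_KEY=sk-...
ELEVENLABS_API_KEY=...
FAL_KEY=...
EOF
```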
## Development

```bash
# Clone
git clone https://github.com/cornball-ai/cornfab
cd cornfab

# Build and run
r -e 'tinyrox::document(); tinypkgr::install()'
r -e 'library(cornfab); run_app()'
```
## Reference

See the Function Reference for complete API documentation.