A fast, local neural text-to-speech system that sounds great and is optimized for the Raspberry Pi 4. Piper is used in a variety of projects.
```sh
echo 'Welcome to the world of speech synthesis!' | \
  ./piper --model en-us-blizzard_lessac-medium.onnx --output_file welcome.wav
```
Listen to voice samples and check out a video tutorial by Thorsten Müller
Voices are trained with VITS and exported to ONNX for use with onnxruntime.
Our goal is to support Home Assistant and the Year of Voice.
Download voices for the supported languages:
- Catalan (ca)
- Danish (da)
- German (de)
- British English (en-gb)
- U.S. English (en-us)
- Spanish (es)
- Finnish (fi)
- French (fr)
- Greek (el-gr)
- Icelandic (is)
- Italian (it)
- Kazakh (kk)
- Nepali (ne)
- Dutch (nl)
- Norwegian (no)
- Polish (pl)
- Brazilian Portuguese (pt-br)
- Russian (ru)
- Swedish (sv-se)
- Ukrainian (uk)
- Vietnamese (vi)
- Chinese (zh-cn)
Download a release:
If you want to build from source, see the Makefile and C++ source.
You must download and extract piper-phonemize to `lib/Linux-$(uname -m)/piper_phonemize` before building. For example, `lib/Linux-x86_64/piper_phonemize/lib/libpiper_phonemize.so` should exist on AMD/Intel machines (along with everything else from `libpiper_phonemize-amd64.tar.gz`).
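As a rough sketch, assuming the release tarball from above has already been downloaded (the exact tarball layout may differ, so verify afterwards that `libpiper_phonemize.so` ends up under `lib/Linux-<arch>/piper_phonemize/lib/`):

```sh
# Illustrative only: unpack piper-phonemize where the Makefile expects it.
mkdir -p "lib/Linux-$(uname -m)"
tar -xzf libpiper_phonemize-amd64.tar.gz -C "lib/Linux-$(uname -m)/"
```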
- Download a voice and extract the `.onnx` and `.onnx.json` files
- Run the `piper` binary with text on standard input, `--model /path/to/your-voice.onnx`, and `--output_file output.wav`
For example:
```sh
echo 'Welcome to the world of speech synthesis!' | \
  ./piper --model en-us-lessac-medium.onnx --output_file welcome.wav
```
For multi-speaker models, use `--speaker <number>` to change speakers (default: 0). See `piper --help` for more options.
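For instance, a multi-speaker run might look like this (the voice file name is illustrative):

```sh
echo 'Welcome to the world of speech synthesis!' | \
  ./piper --model multi-speaker-voice.onnx --speaker 1 --output_file welcome.wav
```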
Piper has been used in the following projects/papers:
- Home Assistant
- Rhasspy 3
- NVDA - NonVisual Desktop Access
- Image Captioning for the Visually Impaired and Blind: A Recipe for Low-Resource Languages
- Open Voice Operating System
- JetsonGPT
To train your own voice, see `src/python`. Pretrained checkpoints are available on Hugging Face.
Start by installing system dependencies:
```sh
sudo apt-get install python3-dev
```
Then create a virtual environment:
```sh
cd piper/src/python
python3 -m venv .venv
source .venv/bin/activate
pip3 install --upgrade pip
pip3 install --upgrade wheel setuptools
pip3 install -r requirements.txt
```
Run the `build_monotonic_align.sh` script in the `src/python` directory to build the extension. Ensure you have espeak-ng installed (`sudo apt-get install espeak-ng`).
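A minimal sketch of those two steps, assuming the virtual environment above is active and you are in `piper/src/python`:

```sh
# Build the monotonic alignment extension used during training
./build_monotonic_align.sh

# Make sure espeak-ng is available for phonemization
sudo apt-get install espeak-ng
```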
Next, preprocess your dataset:
```sh
python3 -m piper_train.preprocess \
  --language en-us \
  --input-dir /path/to/ljspeech/ \
  --output-dir /path/to/training_dir/ \
  --dataset-format ljspeech \
  --sample-rate 22050
```
Datasets must either be in the LJSpeech format (with only id/text columns, or id/speaker/text) or from Mimic Recording Studio (`--dataset-format mycroft`).
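For reference, LJSpeech-style metadata rows are pipe-delimited; a made-up two-row example (the utterance ids and sentences are hypothetical):

```sh
# Illustrative only: create a tiny LJSpeech-style metadata.csv with id|text rows
cat > metadata.csv << 'EOF'
utt_0001|Welcome to the world of speech synthesis!
utt_0002|This is a second example sentence.
EOF
```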
Finally, you can train:
```sh
python3 -m piper_train \
  --dataset-dir /path/to/training_dir/ \
  --accelerator 'gpu' \
  --devices 1 \
  --batch-size 32 \
  --validation-split 0.05 \
  --num-test-examples 5 \
  --max_epochs 10000 \
  --precision 32
```
Training uses PyTorch Lightning. Run `tensorboard --logdir /path/to/training_dir/lightning_logs` to monitor. See `python3 -m piper_train --help` for many additional options.
It is highly recommended to train with the following Dockerfile:

```dockerfile
FROM nvcr.io/nvidia/pytorch:22.03-py3

RUN pip3 install \
    'pytorch-lightning'

ENV NUMBA_CACHE_DIR=.numba_cache
```
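One way to use it (a sketch, assuming the Dockerfile sits at the root of your piper checkout; the image tag and mount path are illustrative):

```sh
# Build the training image from the Dockerfile above
docker build -t piper-train .

# Start an interactive container with GPU access and the checkout mounted,
# then follow the same preprocess/train steps inside the container
docker run --rm -it --gpus all -v "$(pwd):/workspace/piper" piper-train bash
```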
See the various `infer_*` and `export_*` scripts in `src/python/piper_train` to test and export your voice from the checkpoint in `lightning_logs`. The `dataset.jsonl` file in your training directory can be used with `python3 -m piper_train.infer` for quick testing:
```sh
head -n5 /path/to/training_dir/dataset.jsonl | \
  python3 -m piper_train.infer \
    --checkpoint lightning_logs/path/to/checkpoint.ckpt \
    --sample-rate 22050 \
    --output-dir wavs
```
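Exporting to ONNX follows the same pattern; a sketch of what the invocation might look like (paths are illustrative and the exact arguments are an assumption, so check the export script's `--help`):

```sh
# Assumed invocation: export the checkpoint to an .onnx voice file
python3 -m piper_train.export_onnx \
  lightning_logs/path/to/checkpoint.ckpt \
  /path/to/your-voice.onnx

# piper expects the voice config next to the model as <model>.onnx.json
cp /path/to/training_dir/config.json /path/to/your-voice.onnx.json
```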
To run Piper from Python, see `src/python_run`. Run `scripts/setup.sh` to create a virtual environment and install the requirements. Then run:
```sh
echo 'Welcome to the world of speech synthesis!' | scripts/piper \
  --model /path/to/voice.onnx \
  --output_file welcome.wav
```
If you'd like to use a GPU, install the `onnxruntime-gpu` package:

```sh
.venv/bin/pip3 install onnxruntime-gpu
```

and then run `scripts/piper` with the `--cuda` argument. You will need a functioning CUDA environment, such as what's available in NVIDIA's PyTorch containers.
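For example, reusing the voice path from above:

```sh
echo 'Welcome to the world of speech synthesis!' | scripts/piper \
  --model /path/to/voice.onnx \
  --cuda \
  --output_file welcome.wav
```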