Implementation of a non-autoregressive Transformer based neural network for Text-to-Speech (TTS).
This repo is based on the following papers:
- Neural Speech Synthesis with Transformer Network
- FastSpeech: Fast, Robust and Controllable Text to Speech
Spectrograms produced with LJSpeech and standard data configuration from this repo are compatible with WaveRNN.
Being non-autoregressive, this Transformer model is:
- Robust: no repetitions or failed attention modes on challenging sentences.
- Fast: With no autoregression, predictions take a fraction of the time.
- Controllable: It is possible to control the speed of the generated utterance.
These samples' spectrograms are converted using the pre-trained WaveRNN vocoder.
Try it out on Colab:
Version | Colab Link |
---|---|
Forward | |
Autoregressive | |
Make sure you have:
- Python >= 3.6
Install espeak as the phonemizer backend (on macOS use brew):
sudo apt-get install espeak
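On macOS the equivalent, assuming Homebrew is installed, is:
brew install espeak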
Then install the rest with pip:
pip install -r requirements.txt
Read the individual scripts for more command line arguments.
You can directly use LJSpeech to create the training dataset.
- If training on LJSpeech, or if unsure, simply use `config/standard`
- EDIT PATHS: in `data_config.yaml`, edit the paths to point at your dataset and log folders
Prepare a dataset in the following format:
|- dataset_folder/
|   |- metadata.csv
|   |- wav/
|       |- file1.wav
|       |- ...
where `metadata.csv` has the following format:
wav_file_name|transcription
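If your corpus is not already in this layout, a minimal sketch along the following lines can assemble `metadata.csv`. The `transcriptions` dict is a hypothetical placeholder for wherever your transcripts live, and the file name is written without the .wav extension (as in LJSpeech's own metadata.csv):

```python
from pathlib import Path

# Hypothetical placeholder: map each wav file name (without extension) to its transcript.
transcriptions = {
    "file1": "First example sentence.",
    "file2": "Second example sentence.",
}

dataset_folder = Path("dataset_folder")
with open(dataset_folder / "metadata.csv", "w", encoding="utf-8") as f:
    for wav_path in sorted((dataset_folder / "wav").glob("*.wav")):
        # One line per file: wav_file_name|transcription
        f.write(f"{wav_path.stem}|{transcriptions.get(wav_path.stem, '')}\n")
```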
Then create the training dataset:
python create_dataset.py --config config/standard
Train the autoregressive model:
python train_autoregressive.py --config config/standard
To train the forward model, first use the autoregressive model to create the durations dataset:
python extract_durations.py --config config/standard --binary --fix_jumps --fill_mode_next
This will add an additional folder to the dataset folder, containing the new datasets for validation and training of the forward model.
If the rhythm of the trained model is off, play around with the flags of this script to fix the durations.
Then train the forward model:
python train_forward.py --config /path/to/config_folder/
- Training and model settings can be configured in `model_config.yaml`
- To resume training, simply use the same configuration files AND the `--session_name` flag, if any
- To restart training, delete the weights and/or the logs from the logs folder, using the training flag `--reset_dir` (both), or `--reset_logs`, `--reset_weights`
We log some information that can be visualized with TensorBoard:
tensorboard --logdir /logs/directory/
Predict with either the Forward or Autoregressive model:

```python
from utils.config_manager import ConfigManager
from utils.audio import reconstruct_waveform

config_loader = ConfigManager('/path/to/config/', model_kind='forward')
model = config_loader.load_model()
out = model.predict('Please, say something.')

# Convert spectrogram to wav (with griffin lim)
wav = reconstruct_waveform(out['mel'].numpy().T, config=config_loader.config)
```
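To listen to the result you can write the waveform to disk. This is a minimal sketch, not part of the repo: it assumes soundfile is installed and uses 22050 Hz, the LJSpeech default sampling rate (check the value in your `data_config.yaml`):

```python
import soundfile as sf

# 22050 Hz is the LJSpeech default; use the sampling rate from your data_config.yaml.
sf.write('prediction.wav', wav, samplerate=22050)
```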
Model URL | Commit |
---|---|
ljspeech_forward_model | 4945e775b |
ljspeech_autoregressive_model_v2 | 4945e775b |
ljspeech_autoregressive_model_v1 | 2f3a1b5 |
- Francesco Cardinale, github: cfrancesco
WaveRNN: we took the data processing from here and use their vocoder to produce the samples.
Thanks to Erogol and the Mozilla TTS team for the lively exchange on the topic.
See LICENSE for details.