This is the official repository 👑 for the Emilia dataset and the source code for the Emilia-Pipe speech data preprocessing pipeline.
- 2024/07/08: Our preprint paper is now available! 🔥🔥🔥
- 2024/07/03: We welcome everyone to check our homepage for a brief introduction to the Emilia dataset and our demos!
- 2024/07/01: We release Emilia and Emilia-Pipe! We welcome everyone to explore them! 🎉🎉🎉
🎤 Emilia is a comprehensive, multilingual dataset with the following features:
- containing over 101k hours of speech data;
- covering six different languages: English (En), Chinese (Zh), German (De), French (Fr), Japanese (Ja), and Korean (Ko);
- containing diverse speech data with various speaking styles;
A detailed description of the dataset can be found in our paper.
🛠️ Emilia-Pipe is the first open-source preprocessing pipeline designed to transform raw, in-the-wild speech data into high-quality training data with annotations for speech generation. This pipeline can process one hour of raw audio into model-ready data in just a few minutes, requiring only the raw speech data.
To use the Emilia dataset, you can download the raw audio files from our provided source URL list on HuggingFace and use our open-source Emilia-Pipe preprocessing pipeline to preprocess the raw data and rebuild the dataset.
Please note that Emilia does not own the copyright to the audio; the copyright remains with the original owners of the videos or audio. Additionally, users can easily use Emilia-Pipe to preprocess their own raw speech data for custom needs.
By open-sourcing the Emilia-Pipe code, we aim to enable the speech community to collaborate on large-scale speech generation research.
The following README introduces the installation and usage of Emilia-Pipe.
The Emilia-Pipe includes the following major steps:
- Standardization: Audio normalization
- Source Separation: Long audio -> Long audio without BGM
- Speaker Diarization: Get medium-length single-speaker speech data
- Fine-grained Segmentation by VAD: Get 3-30s single-speaker speech segments (see the VAD sketch after this list)
- ASR: Get transcriptions of the speech segments
- Filtering: Obtain the final processed dataset
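For illustration, the VAD step above can be reproduced in isolation with the Silero VAD model credited in the acknowledgments below. This is a minimal sketch, not the pipeline's actual segmentation logic; the file name `audio_without_bgm.wav` is a placeholder, and parameter availability depends on the installed silero-vad version:

```python
# Minimal sketch of the VAD step, using Silero VAD via torch.hub
# (assumes torch is installed; 'audio_without_bgm.wav' is a placeholder file name).
import torch

model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

wav = read_audio("audio_without_bgm.wav", sampling_rate=16000)  # e.g. output of source separation
timestamps = get_speech_timestamps(wav, model, sampling_rate=16000, return_seconds=True)
print(timestamps)  # e.g. [{'start': 0.5, 'end': 4.2}, ...]
```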
- Install Python and CUDA.
- Run the following commands to install the required packages:
conda create -y -n AudioPipeline python=3.9
conda activate AudioPipeline
bash env.sh
- Download the model files from the third-party repositories.
  - Manually download the checkpoints of UVR-MDX-NET-Inst_HQ_3 (UVR-MDX-NET-Inst_HQ_3.onnx) and DNSMOS P.835 (sig_bak_ovr.onnx), then save their paths for the next step's configuration (i.e., the #2 and #3 TODOs).
  - Create an access token for pyannote/speaker-diarization-3.1 following the guide, then save it for the next step's configuration (i.e., the #4 TODO).
  - Make sure you have a stable connection to GitHub and HuggingFace. The checkpoints of Silero and WhisperX-medium will be downloaded automatically on the pipeline's first run.
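Before moving on, it may help to confirm that the pyannote access token is valid. The snippet below is a rough sanity check, not part of the pipeline; replace the placeholder token with your own:

```python
# Rough sanity check for the pyannote access token (assumes pyannote.audio 3.x is installed by env.sh).
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="<HUGGINGFACE_ACCESS_TOKEN>",  # placeholder; same token as the #4 TODO below
)
print("Loaded pyannote pipeline:", pipeline is not None)
```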
Change the config.json file according to the following TODOs.
{
"language": {
"multilingual": true,
"supported": [
"zh",
"en",
"fr",
"ja",
"ko",
"de"
]
},
"entrypoint": {
// TODO: Fill in the input_folder_path.
"input_folder_path": "examples", // #1: Data input folder for processing
"SAMPLE_RATE": 24000
},
"separate": {
"step1": {
// TODO: Fill in the source separation model's path.
"model_path": "/path/to/model/separate_model/UVR-MDX-NET-Inst_HQ_3.onnx", // #2: Model path
"denoise": true,
"margin": 44100,
"chunks": 15,
"n_fft": 6144,
"dim_t": 8,
"dim_f": 3072
}
},
"mos_model": {
// TODO: Fill in the DNSMOS prediction model's path.
"primary_model_path": "/path/to/model/mos_model/DNSMOS/sig_bak_ovr.onnx" // #3: Model path
},
// TODO: Fill in your huggingface access token for pyannote.
"huggingface_token": "<HUGGINGFACE_ACCESS_TOKEN>" // #4: Huggingface access token for pyannote
}
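Since the `config.json` shown above contains `//`-style comments, one quick way to sanity-check the filled-in paths is to strip the comments before parsing. This is a rough sketch under that assumption, not part of the pipeline:

```python
# Sketch: validate the TODO fields in config.json before running the pipeline.
# The comment-stripping regex is a rough heuristic that is fine for this file.
import json
import os
import re

with open("config.json") as f:
    config = json.loads(re.sub(r"//[^\n]*", "", f.read()))

paths = {
    "#1 input_folder_path": config["entrypoint"]["input_folder_path"],
    "#2 separation model": config["separate"]["step1"]["model_path"],
    "#3 DNSMOS model": config["mos_model"]["primary_model_path"],
}
for label, path in paths.items():
    print(f"{label}: {path} -> {'found' if os.path.exists(path) else 'MISSING'}")

print("#4 huggingface_token set:", bool(config.get("huggingface_token")))
```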
- Change the `input_folder_path` in `config.json` to the folder path where the downloaded audio files are stored (i.e., the #1 TODO).
- Run the following command to process the audio files:
conda activate AudioPipeline
export CUDA_VISIBLE_DEVICES=0  # Set the GPU(s) used to run the pipeline; separate multiple IDs with commas
python main.py
The processed audio (default 24 kHz sample rate) will be saved in the `input_folder_path`_processed folder. The results for a single audio file are saved in a folder with its original name and include the following information:
- MP3 files: `<original_name>_<idx>.mp3`, where `idx` corresponds to the index in the JSON-encoded array.
- JSON file: `<original_name>.json`
[
{
"text": "So, don't worry about that. But, like for instance, like yesterday was very hard for me to say, you know what, I should go to bed.", // Transcription
"start": 67.18, // Start timestamp, in second unit
"end": 74.41, // End timestamp, in second unit
"language": "en", // Language
"dnsmos": 3.44 // DNSMOS P.835 score
}
]
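As a usage example, the metadata file described above can be read back to filter segments, e.g. by DNSMOS score. A minimal sketch, where the folder and file names are placeholders following the naming scheme above and the 3.0 threshold is an arbitrary example:

```python
# Sketch: load one processed result and keep segments above a DNSMOS threshold.
import json
from pathlib import Path

result_dir = Path("examples_processed/<original_name>")  # placeholder result folder
segments = json.loads((result_dir / "<original_name>.json").read_text())

for idx, seg in enumerate(segments):  # idx matches the MP3 file suffix
    if seg["dnsmos"] >= 3.0:  # arbitrary example threshold
        mp3 = result_dir / f"{result_dir.name}_{idx}.mp3"
        print(mp3.name, f"{seg['start']:.2f}-{seg['end']:.2f}s", seg["language"], seg["text"])
```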
We acknowledge the wonderful work by these excellent developers!
- Source Separation: UVR-MDX-NET-Inst_HQ_3
- VAD: snakers4/silero-vad
- Speaker Diarization: pyannote/speaker-diarization-3.1
- ASR: m-bain/whisperX, using faster-whisper and CTranslate2 backend.
- DNSMOS Prediction: DNSMOS P.835
If you use the Emilia dataset or the Emilia-Pipe pipeline, please cite the following papers:
@article{emilia,
title={Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation},
author={He, Haorui and Shang, Zengqiang and Wang, Chaoren and Li, Xuyuan and Gu, Yicheng and Hua, Hua and Liu, Liwei and Yang, Chen and Li, Jiaqi and Shi, Peiyang and Wang, Yuancheng and Chen, Kai and Zhang, Pengyuan and Wu, Zhizheng},
journal={arXiv},
volume={abs/2407.05361},
year={2024}
}
@article{amphion,
title={Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
author={Zhang, Xueyao and Xue, Liumeng and Gu, Yicheng and Wang, Yuancheng and He, Haorui and Wang, Chaoren and Chen, Xi and Fang, Zihao and Chen, Haopeng and Zhang, Junan and Tang, Tze Ying and Zou, Lexiao and Wang, Mingxuan and Han, Jun and Chen, Kai and Li, Haizhou and Wu, Zhizheng},
journal={arXiv},
volume={abs/2312.09911},
year={2024},
}