
EmoSphere++: Emotion-Controllable Zero-Shot Text-to-Speech via Emotion-Adaptive Spherical Vector
The official implementation of EmoSphere++

Deok-Hyeon Cho, Hyung-Seok Oh, Seung-Bin Kim, Seong-Whan Lee

Department of Artificial Intelligence, Korea University, Seoul, Korea.

Abstract

Emotional text-to-speech (TTS) technology has achieved significant progress in recent years; however, challenges remain owing to the inherent complexity of emotions and limitations of the available emotional speech datasets and models. Previous studies typically relied on limited emotional speech datasets or required extensive manual annotations, restricting their ability to generalize across different speakers and emotional styles. In this paper, we present EmoSphere++, an emotion-controllable zero-shot TTS model that can control emotional style and intensity to resemble natural human speech. We introduce a novel emotion-adaptive spherical vector that models emotional style and intensity without human annotation. Moreover, we propose a multi-level style encoder that can ensure effective generalization for both seen and unseen speakers. We also introduce additional loss functions to enhance the emotion transfer performance for zero-shot scenarios. We employ a conditional flow matching-based decoder to achieve high-quality and expressive emotional TTS in a few sampling steps. Experimental results demonstrate the effectiveness of the proposed framework.
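
For intuition, the following sketch illustrates the idea behind the emotion-adaptive spherical vector described in the abstract: VAD (valence-arousal-dominance) values are re-centered on the neutral-emotion centroid and converted to spherical coordinates, so that the radius reflects emotion intensity and the angles reflect emotion style. This is our own illustrative reading, not the repository's exact implementation; all names are hypothetical.

import numpy as np

def to_emotion_sphere(vad, neutral_centroid):
    """Map a VAD point to spherical coordinates relative to the neutral centroid.

    vad, neutral_centroid: length-3 arrays of (valence, arousal, dominance).
    Returns (r, theta, phi): r acts as emotion intensity,
    (theta, phi) as the emotion-style direction.
    """
    v = np.asarray(vad, dtype=float) - np.asarray(neutral_centroid, dtype=float)
    r = np.linalg.norm(v)                          # distance from neutral ~ intensity
    theta = np.arccos(v[2] / r) if r > 0 else 0.0  # polar angle
    phi = np.arctan2(v[1], v[0])                   # azimuthal angle
    return r, theta, phi

# Example: a point with higher valence and arousal than the neutral centroid.
print(to_emotion_sphere([0.6, 0.7, 0.5], [0.5, 0.5, 0.5]))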

Training Procedure

Environments

pip install -r requirements.txt
sudo apt install -y sox libsox-fmt-mp3
bash mfa_usr/install_mfa.sh # install forced alignment tools

Vocoder

The BigVGAN 16k checkpoint will be released at a later date. In the meantime, please train a vocoder with the official BigVGAN implementation or use the official HiFi-GAN checkpoint (a loading sketch follows below).
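
A minimal loading sketch, assuming the official jik876/hifi-gan repository layout (a Generator class in models.py, an AttrDict helper in env.py, and checkpoints that store a "generator" state dict); paths, config, and checkpoint names must be adapted to whatever vocoder you actually use:

import json
import torch
from env import AttrDict         # from the official HiFi-GAN repo
from models import Generator     # from the official HiFi-GAN repo

with open("config.json") as f:   # the config shipped with the checkpoint
    h = AttrDict(json.load(f))

generator = Generator(h)
state = torch.load("generator_v1", map_location="cpu")
generator.load_state_dict(state["generator"])
generator.eval().remove_weight_norm()

mel = torch.randn(1, 80, 200)    # (batch, n_mels, frames) produced by the TTS module
with torch.no_grad():
    wav = generator(mel).squeeze()  # waveform in [-1, 1]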


1. Preprocess data

a) VAD Analysis

  • Steps for emotion-specific centroid extraction with VAD analysis (a sketch of the centroid computation follows the command below)
sh Analysis.sh
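
In essence, emotion-specific centroid extraction averages the VAD values of all utterances sharing an emotion label; a minimal sketch of that computation (our own illustration, assuming per-utterance VAD values have already been predicted):

import numpy as np
from collections import defaultdict

def emotion_centroids(items):
    """items: iterable of (emotion_label, (valence, arousal, dominance)) pairs.
    Returns {emotion: mean VAD vector}, i.e. one centroid per emotion class."""
    buckets = defaultdict(list)
    for emotion, vad in items:
        buckets[emotion].append(vad)
    return {e: np.mean(v, axis=0) for e, v in buckets.items()}

print(emotion_centroids([("happy", (0.8, 0.7, 0.6)),
                         ("happy", (0.7, 0.8, 0.5)),
                         ("sad",   (0.2, 0.3, 0.3))]))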

b) Preprocessing

  • Steps for embedding extraction and binary dataset creation (a sketch of the binarization pattern follows the command below)
sh preprocessing.sh
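
Binary dataset creation in TTS pipelines of this kind typically serializes each utterance's features into one data file plus an offset index, allowing fast random access during training; the sketch below shows that generic pattern, not this repository's exact on-disk format:

import pickle

def binarize(items, prefix):
    """Write each item to '<prefix>.data' and record byte offsets in '<prefix>.idx'."""
    offsets = [0]
    with open(f"{prefix}.data", "wb") as f:
        for item in items:  # e.g. {"mel": ..., "phonemes": ..., "spk_embed": ...}
            f.write(pickle.dumps(item))
            offsets.append(f.tell())
    with open(f"{prefix}.idx", "wb") as f:
        pickle.dump(offsets, f)

def load_item(prefix, i):
    """Random-access read of the i-th item without loading the whole file."""
    with open(f"{prefix}.idx", "rb") as f:
        offsets = pickle.load(f)
    with open(f"{prefix}.data", "rb") as f:
        f.seek(offsets[i])
        return pickle.loads(f.read(offsets[i + 1] - offsets[i]))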

2. Train the TTS module and run inference

sh train_run.sh
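
As background for the conditional flow matching decoder mentioned in the abstract: the model is trained to regress the velocity of a straight-line probability path from noise to the target mel-spectrogram, which is what allows high-quality sampling in a few steps. A minimal OT-CFM-style loss sketch (our own illustration; the model signature is hypothetical):

import torch

def cfm_loss(model, x1, cond, sigma_min=1e-4):
    """x1: target mel (batch, n_mels, frames); cond: conditioning passed to the model."""
    x0 = torch.randn_like(x1)                           # noise endpoint of the path
    t = torch.rand(x1.size(0), 1, 1, device=x1.device)  # per-example time in (0, 1)
    xt = (1 - (1 - sigma_min) * t) * x0 + t * x1        # point on the straight-line path
    target = x1 - (1 - sigma_min) * x0                  # constant path velocity to regress
    return torch.mean((model(xt, t.squeeze(), cond) - target) ** 2)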

3. Pretrained checkpoints

Acknowledgements

Our code is based on the following repositories:
