Fictiverse/AudioLDM-Windows

AudioLDM: Generate speech, sound effects, music and beyond, with text.

Text-to-Audio Generation


Generate speech, sound effects, music and beyond.

Windows installation

cd C:\Path\Of\AudioLDM
conda env create --name audioldm --file=environment.yml
conda activate audioldm
pip install -r requirements.txt
python scripts\text2sound.py

Gradio is not used because it has an ffmpeg issue.


Web APP

  1. Prepare the running environment
conda create -n audioldm python=3.8; conda activate audioldm
pip3 install audioldm==0.0.6
git clone https://github.com/haoheliu/AudioLDM; cd AudioLDM
  2. Start the web application (powered by Gradio)
python3 app.py
  3. A link will be printed out. Click the link to open it in your browser and play.

Commandline Usage

  1. Prepare the running environment
# Optional
conda create -n audioldm python=3.8; conda activate audioldm
# Install AudioLDM
pip3 install audioldm==0.0.6
  2. Text-to-audio generation
# Test run
audioldm -t "A hammer is hitting a wooden surface"

For more options, such as guidance scale, batch size, and seed, run

audioldm -h

For evaluation of audio generative models, please refer to audioldm_eval.
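For batch generation, the commandline tool can be driven from a short Python script. A minimal sketch, assuming only the `-t` flag demonstrated above; the prompt list and helper names are illustrative:

```python
import shlex
import subprocess

def build_command(prompt):
    # Assemble one audioldm invocation; "-t" is the text-prompt
    # flag shown in the test run above.
    return ["audioldm", "-t", prompt]

def generate_all(prompts):
    # Run one generation per prompt, sequentially; check=True stops
    # the batch on the first failed invocation.
    for prompt in prompts:
        subprocess.run(build_command(prompt), check=True)

if __name__ == "__main__":
    prompts = [
        "A hammer is hitting a wooden surface",
        "Birds singing in a forest",
    ]
    # Print the commands instead of running them, as a dry run.
    for p in prompts:
        print(shlex.join(build_command(p)))
```

Replace the dry-run loop with `generate_all(prompts)` once the `audioldm` command is on your PATH.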

Web Demo

Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo.

TODO

  • Update the checkpoint with more training steps.
  • Add AudioCaps finetuned AudioLDM-S model
  • Build pip installable package for commandline use
  • Build Gradio web application
  • Add text-guided style transfer
  • Add audio super-resolution
  • Add audio inpainting

Cite this work

If you find this tool useful, please consider citing:

@article{liu2023audioldm,
  title={AudioLDM: Text-to-Audio Generation with Latent Diffusion Models},
  author={Liu, Haohe and Chen, Zehua and Yuan, Yi and Mei, Xinhao and Liu, Xubo and Mandic, Danilo and Wang, Wenwu and Plumbley, Mark D},
  journal={arXiv preprint arXiv:2301.12503},
  year={2023}
}

Hardware requirement

  • GPU with 8 GB of dedicated VRAM
  • A 64-bit operating system (Windows 7, 8.1, or 10; Ubuntu 16.04 or later; or macOS 10.13 or later)
  • 16 GB or more of system RAM
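These requirements can be checked programmatically before starting a long generation run. A small sketch, assuming PyTorch is installed (it degrades gracefully when it is not):

```python
def vram_gib(total_bytes):
    # Convert a byte count to GiB.
    return total_bytes / (1024 ** 3)

def check_gpu(min_gib=8.0):
    # Report whether a CUDA GPU with enough dedicated VRAM is visible.
    try:
        import torch
    except ImportError:
        return "PyTorch not installed"
    if not torch.cuda.is_available():
        return "No CUDA GPU detected; generation would fall back to CPU"
    total = torch.cuda.get_device_properties(0).total_memory
    if vram_gib(total) < min_gib:
        return f"GPU has {vram_gib(total):.1f} GiB; {min_gib:g} GiB recommended"
    return "GPU check passed"

if __name__ == "__main__":
    print(check_gpu())
```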

Reference

Part of the code is borrowed from the following repos. We would like to thank the authors of these repos for their contribution.

https://github.com/LAION-AI/CLAP

https://github.com/CompVis/stable-diffusion

https://github.com/v-iashin/SpecVQGAN

https://github.com/toshas/torch-fidelity

We built the model with data from AudioSet, Freesound, and the BBC Sound Effects library. We share this demo under the UK copyright exception covering data used for academic research.
