AudioCraft is a PyTorch library for deep learning research on audio generation. AudioCraft contains inference and training code for two state-of-the-art AI generative models producing high-quality audio: AudioGen and MusicGen.
- Add AudioGen model
- Download all models to a models directory
- Automatically save all output files to an output directory
- Add a button to open the output folder in the file explorer
- Add a batch count option to control how many predictions to make
- Move the submit and interrupt buttons to the right
- Add run.bat for Windows users, which sets the correct ffmpeg path
A video demonstrating how to use the web UI is available.
AudioCraft requires Python 3.9 and PyTorch 2.0.0. To install AudioCraft, run the following:
```shell
# Clone the repository
git clone https://github.com/lifeisboringsoprogramming/audiocraft.git
cd audiocraft

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows
.\venv\Scripts\activate.bat
# On Linux
source ./venv/bin/activate

# Best to make sure you have torch installed first, in particular before installing xformers.
# Don't run this if you already have PyTorch installed.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# Install AudioCraft in editable mode (required if you want to train)
pip install -e .
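After installation, a quick sanity check can confirm the interpreter meets the Python 3.9 requirement (a minimal standard-library sketch; the version bound is taken from the requirements above):

```python
import sys

def python_ok(min_version=(3, 9)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version

# If PyTorch is installed, torch.__version__ and torch.cuda.is_available()
# additionally report the build and GPU support; omitted here to keep
# this check standard-library-only.
print("Python:", sys.version.split()[0], "OK" if python_ok() else "too old")
```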
On Windows, download a pre-built ffmpeg package:
https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-win64-gpl-shared.zip
Extract it and put it under the .\venv folder inside audiocraft.
We also recommend having ffmpeg installed, either through your system package manager or Anaconda:
```shell
sudo apt-get install ffmpeg
# Or if you are using Anaconda or Miniconda
conda install 'ffmpeg<5' -c conda-forge
```
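Whichever route you take, you can verify that ffmpeg is reachable before generating audio (a minimal sketch using only the Python standard library):

```python
import shutil

def find_ffmpeg():
    """Return the full path to the ffmpeg executable on PATH, or None."""
    return shutil.which("ffmpeg")

path = find_ffmpeg()
if path:
    print("ffmpeg found at:", path)
else:
    print("ffmpeg not found; install it or add it to PATH")
```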
At the moment, AudioCraft contains the training code and inference code for:
- MusicGen: A state-of-the-art controllable text-to-music model.
- AudioGen: A state-of-the-art text-to-sound model.
- EnCodec: A state-of-the-art high fidelity neural audio codec.
- Multi Band Diffusion: An EnCodec-compatible decoder using diffusion.
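As a quick illustration, text-to-music generation with MusicGen follows the pattern below (a sketch based on the public AudioCraft API; the first call downloads the model weights, and a GPU is strongly recommended):

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the smallest pretrained checkpoint; weights are fetched on first use.
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # length of each clip in seconds

# One waveform is generated per text description.
wavs = model.generate(['upbeat acoustic folk with hand claps'])

for idx, one_wav in enumerate(wavs):
    # audio_write handles normalization and writes 0.wav, 1.wav, ...
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```

AudioGen follows the same pattern with `AudioGen.get_pretrained` and sound-effect descriptions instead of music prompts.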
AudioCraft contains PyTorch components for deep learning research in audio and training pipelines for the developed models. For a general introduction of AudioCraft design principles and instructions to develop your own training pipeline, refer to the AudioCraft training documentation.
For reproducing existing work and using the developed training pipelines, refer to the instructions for each specific model, which provide pointers to configuration, example grids, and model/task-specific information and FAQ.
We provide some API documentation for AudioCraft.
Training code is available for EnCodec, MusicGen and Multi Band Diffusion.
Hugging Face stores the models in a specific location, which can be overridden by setting the AUDIOCRAFT_CACHE_DIR environment variable.
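For example, to redirect model downloads to a custom models directory (a minimal sketch; the variable must be set before any pretrained model is loaded):

```python
import os

# Point AudioCraft's model cache at a custom directory.
# Set this before audiocraft fetches any pretrained weights.
os.environ["AUDIOCRAFT_CACHE_DIR"] = os.path.join(os.getcwd(), "models")

print("cache dir:", os.environ["AUDIOCRAFT_CACHE_DIR"])
```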
- The code in this repository is released under the MIT license as found in the LICENSE file.
- The model weights in this repository are released under the CC-BY-NC 4.0 license as found in the LICENSE_weights file.
For the general framework of AudioCraft, please cite the following.
```
@article{copet2023simple,
  title={Simple and Controllable Music Generation},
  author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
  year={2023},
  journal={arXiv preprint arXiv:2306.05284},
}
```
When referring to a specific model, please cite as mentioned in the model-specific README, e.g., ./docs/MUSICGEN.md, ./docs/AUDIOGEN.md, etc.
Please subscribe to my YouTube channel, thank you very much.
☕️ Please consider supporting me on Patreon 🍻