If you want to use Demucs for the MDX challenge, please follow the instructions below.
Follow the instructions from the main README in order to set up Demucs using Anaconda. You will need the full setup for training, including soundstretch.
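For reference, the setup typically looks like the sketch below. The environment file name is taken from the main Demucs README (use the CPU variant if you have no GPU), and the soundstretch install command assumes a Debian/Ubuntu system; defer to the main README if anything differs.

```bash
conda env update -f environment-cuda.yml   # environment-cpu.yml for CPU-only machines
conda activate demucs
sudo apt-get install soundstretch          # needed for the pitch/tempo augmentation used during training
```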
Download MusDB-HQ to some folder and unzip it.
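For example (the archive name and destination folder below are placeholders, adjust them to wherever you downloaded MusDB-HQ):

```bash
mkdir -p ~/data/musdb18hq
unzip musdb18hq.zip -d ~/data/musdb18hq
```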
Train Demucs (you might need to change the batch size depending on the number of GPUs available). It seems 48 channels is enough to get the best performance on MusDB-HQ, and training will be faster and less memory demanding.
./run.py --channels=48 --batch_size 64 --musdb=PATH_TO_MUSDB --is_wav [EXTRA_FLAGS]
Once the training is completed, a new model file will be exported in models/.
You can look at the SDR on the MusDB dataset by running python result_table.py.
If you want to export a model before training is complete, use the following command:
python -m demucs [ALL EXACT TRAINING FLAGS] --save_model
Once this is done, you can partially evaluate a model with
./run.py --test models/NAME_OF_MODEL.th --musdb=PATH_TO_MUSDB --is_wav
Git clone the Music Demixing Challenge - Starter Kit - Demucs Edition.
Inside the starter kit, create a models/ folder and copy over the trained model from the Demucs repo (renaming it, for instance, to my_model.th).
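For example, assuming the Demucs repo and the starter kit are checked out side by side (the folder and model names below are placeholders):

```bash
cd music-demixing-challenge-starter-kit   # placeholder for the starter kit checkout
mkdir -p models
cp ../demucs/models/NAME_OF_MODEL.th models/my_model.th
```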
Inside the test_demuc.py file, change the prediction_setup function: comment out the loading of the pre-trained model, and uncomment the code to load your own model.
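As a before/after sketch only (the exact attribute names and loading helper in test_demuc.py may differ, and the correct loading code is already present there, commented out; torch.load and the my_model.th path are assumptions for illustration):

```python
import torch

def prediction_setup(self):
    # 1) Comment out the starter kit's loading of the stock pre-trained model, e.g.:
    # self.separator = ...load the pre-trained Demucs model...

    # 2) Uncomment (or add) the loading of your own trained model.
    #    torch.load here is an assumption; reuse whatever loading call
    #    the starter kit provides in its commented-out section.
    self.separator = torch.load("models/my_model.th", map_location="cpu")
    self.separator.eval()
```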
Install git-lfs. Then run
git lfs install
git lfs track "*.th"
git add .gitattributes
git add models/
git add -u .
git commit -m "My Demucs submission"
and then follow the submission instructions.
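Before pushing, you can double-check that the model file is actually tracked by LFS (a plain git push of a large .th file may otherwise be rejected):

```bash
git lfs ls-files   # models/my_model.th should appear in this list
```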
Best of luck!