Official PyTorch Implementation
No fancy design, just quality injection; enjoy your beautiful music.
A checkpoint is provisionally provided; we will release updates and fix potential issues soon.
https://pan.baidu.com/s/1pkLnQhbNeFjKRadXUy_7Iw?pwd=v9dd
This repository provides an implementation of QA-MDT, integrating state-of-the-art models for music generation. The code and methods are based on the following repositories:
Python 3.10
Dependencies are specified in qamdt.yaml
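A minimal setup sketch, assuming qamdt.yaml is a conda environment specification; the environment name `qamdt` is an assumption, so use whatever name the file actually defines:

```sh
# Create and activate the environment from the provided spec.
# The environment name "qamdt" is assumed; check the name field in qamdt.yaml.
conda env create -f qamdt.yaml
conda activate qamdt
```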
Before training, you need to download the extra checkpoints referenced in ./audioldm_train/config/mos_as_token/qa_mdt.yaml and offset_pretrained_checkpoints.json. Note that all of the checkpoints above are included in the download linked above.
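To see exactly which checkpoint paths the configs expect, it can help to inspect them before training; the grep pattern below is only a guess at how the entries are named, so adjust it to the actual keys:

```sh
# List the pretrained-checkpoint entries referenced by the training config.
# The search pattern is an assumption; adjust it to the actual key names.
grep -nE "ckpt|checkpoint|pretrained" ./audioldm_train/config/mos_as_token/qa_mdt.yaml
cat offset_pretrained_checkpoints.json
```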
sh run.sh
sh infer/infer.sh
# you may edit infer.sh to choose which quality level you want to infer at
# by default it is set to 5, which represents the highest quality
# additionally, it may be useful to prepend the text prefix "high quality" to the prompt,
# which matches the training process and may further improve performance
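As a hypothetical illustration of the prompt prefix, the sketch below prepends "high quality, " to every line of a plain-text prompt list; the file names prompts.txt and prompts_hq.txt are assumptions, and the quality level itself is set inside infer.sh:

```sh
# Prepend the "high quality" prefix used during training to each prompt.
# prompts.txt / prompts_hq.txt are hypothetical file names.
sed 's/^/high quality, /' prompts.txt > prompts_hq.txt
```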