Python package to tokenize MIDI music files, presented at the ISMIR 2021 LBDs.
MidiTok can tokenize MIDI files, i.e. convert them into sequences of tokens ready to be fed to models such as Transformer, for any generation, transcription or MIR task. MidiTok features most known MIDI tokenizations (e.g. REMI, Compound Word...), and is built around the idea that they all share common parameters and methods. It supports Byte Pair Encoding (BPE) and data augmentation.
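To make the idea concrete, here is a minimal, hypothetical sketch of how a single note can be mapped to a sequence of tokens. This is illustrative only, not MidiTok's actual implementation; the token names and the `note_to_tokens` helper are made up for the example, and the velocity binning mirrors the `num_velocities=16` setting shown further below.

```python
# Hypothetical sketch of note-to-token conversion (NOT MidiTok's internals).
# A note (pitch, velocity, duration) becomes a sequence of discrete tokens,
# which can then be mapped to integer ids and fed to a Transformer.

def note_to_tokens(pitch: int, velocity: int, duration_beats: float) -> list[str]:
    # Quantize the MIDI velocity (0-127) into 16 bins
    vel_bin = min(velocity * 16 // 128, 15)
    return [
        f"Pitch_{pitch}",
        f"Velocity_{vel_bin}",
        f"Duration_{duration_beats}",
    ]

tokens = note_to_tokens(pitch=60, velocity=100, duration_beats=1.0)
# tokens == ["Pitch_60", "Velocity_12", "Duration_1.0"]
```

Real tokenizations such as REMI add further tokens (e.g. bar and position tokens) to encode timing, but the principle is the same: continuous musical attributes are quantized into a discrete vocabulary.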
Documentation: miditok.readthedocs.io
pip install miditok
MidiTok uses Symusic to read and write MIDI files, and BPE is backed by Hugging Face 🤗tokenizers for super-fast encoding.
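For intuition, BPE repeatedly merges the most frequent pair of adjacent tokens into a single new token, shrinking sequences while growing the vocabulary. A minimal pure-Python sketch of one merge step (illustrative only; in MidiTok this is delegated to the Hugging Face tokenizers library):

```python
from collections import Counter

def bpe_merge_step(seq: list[str]) -> list[str]:
    """Merge the most frequent adjacent token pair into a single new token."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
            merged.append(a + "+" + b)  # the newly created merged token
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    return merged

seq = ["Pitch_60", "Vel_12", "Pitch_60", "Vel_12", "Pitch_62"]
print(bpe_merge_step(seq))
# ['Pitch_60+Vel_12', 'Pitch_60+Vel_12', 'Pitch_62']
```

Applying such merge steps over a whole corpus until a target vocabulary size is reached is, in essence, what `learn_bpe` does (much faster, in Rust, via 🤗tokenizers).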
Below is a complete yet concise example of how to use MidiTok. A simple notebook example also shows how to use Hugging Face models to generate music, with MidiTok taking care of tokenizing the MIDIs.
from miditok import REMI, TokenizerConfig
from miditok.pytorch_data import DatasetTok, DataCollator
from pathlib import Path
from symusic import Score
from torch.utils.data import DataLoader

# Create a multitrack tokenizer configuration; read the docs to explore other parameters
config = TokenizerConfig(num_velocities=16, use_chords=True, use_programs=True)
tokenizer = REMI(config)

# Load a MIDI file, convert it to tokens, and convert the tokens back to a MIDI
midi = Score("path/to/your_midi.mid")
tokens = tokenizer(midi)  # calling the tokenizer will automatically detect MIDIs, paths and tokens
converted_back_midi = tokenizer(tokens)  # PyTorch / TensorFlow / Numpy tensors supported

# Train the tokenizer with BPE, and save it to load it back later
midi_paths = list(Path("path", "to", "midis").glob("**/*.mid"))
tokenizer.learn_bpe(vocab_size=30000, files_paths=midi_paths)
tokenizer.save_params(Path("path", "to", "save", "tokenizer.json"))
# And push it to the Hugging Face Hub (you can download it back with .from_pretrained)
tokenizer.push_to_hub("username/model-name", private=True, token="your_hf_token")

# Create a Dataset and a collator to be used with a PyTorch DataLoader to train a model
dataset = DatasetTok(
    files_paths=midi_paths,
    min_seq_len=100,
    max_seq_len=1024,
    tokenizer=tokenizer,
)
collator = DataCollator(
    tokenizer["PAD_None"], tokenizer["BOS_None"], tokenizer["EOS_None"]
)
data_loader = DataLoader(dataset=dataset, collate_fn=collator)
for batch in data_loader:
    print("Train your model on this batch...")
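The collator's main job is to pad variable-length token-id sequences to a common length so they can be stacked into a batch tensor. A pure-Python sketch of that padding logic (illustrative only; `pad_batch` is a made-up helper, not DataCollator's actual code):

```python
def pad_batch(sequences: list[list[int]], pad_id: int) -> list[list[int]]:
    """Right-pad each token-id sequence to the length of the longest one."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_id] * (max_len - len(s)) for s in sequences]

batch = pad_batch([[1, 5, 9], [1, 5]], pad_id=0)
# batch == [[1, 5, 9], [1, 5, 0]]
```

The pad token id is why the collator above is given `tokenizer["PAD_None"]`; the BOS and EOS ids are likewise used to mark sequence boundaries.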
MidiTok implements the following tokenizations (with links to the original papers):
You can find short presentations in the documentation.
Contributions are gratefully welcomed, feel free to open an issue or send a PR if you want to add a tokenization or speed up the code. You can read the contribution guide for details.
Future updates will include:
- A no_duration_drums option, discarding duration tokens for drum notes;
- Extending unimplemented additional tokens to all compatible tokenizations;
- Control Change messages;
- Speeding up the MIDI preprocessing + global/track events parsing with Rust or C++ bindings.
If you use MidiTok for your research, a citation in your manuscript would be greatly appreciated. ❤️
[MidiTok paper] [MidiTok original ISMIR publication]
@inproceedings{miditok2021,
title={{MidiTok}: A Python package for {MIDI} file tokenization},
author={Fradet, Nathan and Briot, Jean-Pierre and Chhel, Fabien and El Fallah Seghrouchni, Amal and Gutowski, Nicolas},
booktitle={Extended Abstracts for the Late-Breaking Demo Session of the 22nd International Society for Music Information Retrieval Conference},
year={2021},
url={https://archives.ismir.net/ismir2021/latebreaking/000005.pdf},
}
The BibTeX citations of all tokenizations can be found in the documentation.
Special thanks to all the contributors. We acknowledge Aubay, the LIP6, LERIA and ESEO for the initial financing and support.