Python package to tokenize MIDI music files, presented at the ISMIR 2021 LBD.
MidiTok converts MIDI music files into sequences of tokens, ready to be fed to sequential deep learning models such as Transformers. MidiTok features most known MIDI tokenizations (e.g. REMI, Compound Word...), and is built around the idea that they all share common parameters and methods. It provides methods to properly pre-process any MIDI file, and also supports Byte Pair Encoding (BPE).
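For intuition, a tokenizer turns note events into a flat sequence of discrete tokens, which a vocabulary then maps to integer ids for the model. An illustrative REMI-style sequence (simplified; the exact token names and vocabulary depend on the tokenizer's configuration):

```python
# One bar containing a single C4 quarter note, REMI-style (simplified example)
tokens = ["Bar_None", "Position_0", "Pitch_60", "Velocity_96", "Duration_1.0"]

# A vocabulary maps each token string to an integer id for the model
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[tok] for tok in tokens]
```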
Documentation: [miditok.readthedocs.io](https://miditok.readthedocs.io)
```shell
pip install miditok
```
MidiTok uses Miditoolkit, which itself uses Mido, to read and write MIDI files, and BPE is backed by the Hugging Face 🤗tokenizers library for fast encoding.
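The core idea of BPE is simple: repeatedly merge the most frequent pair of adjacent tokens into a single new token, shrinking sequences while growing the vocabulary. A toy sketch of one merge step (illustrative only; MidiTok delegates the real work to 🤗tokenizers):

```python
from collections import Counter

def most_frequent_pair(seq):
    """Return the most common adjacent token pair in a sequence."""
    return Counter(zip(seq, seq[1:])).most_common(1)[0][0]

def merge_pair(seq, pair, new_token):
    """Replace every occurrence of `pair` with a single `new_token`."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

# Toy token sequence where "Pitch_60 Velocity_90" co-occurs often
seq = ["Pitch_60", "Velocity_90", "Duration_1.0",
       "Pitch_60", "Velocity_90", "Duration_0.5"]
pair = most_frequent_pair(seq)  # ("Pitch_60", "Velocity_90")
seq = merge_pair(seq, pair, "Pitch_60+Velocity_90")
# seq is now 4 tokens instead of 6
```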
The most basic and useful methods are summarized here.
```python
from miditok import REMI
from miditok.utils import get_midi_programs
from miditoolkit import MidiFile
from pathlib import Path

# Creates the tokenizer and loads a MIDI
tokenizer = REMI()  # using the default parameters, read the documentation to customize your tokenizer
midi = MidiFile('path/to/your_midi.mid')

# Converts MIDI to tokens, and back to a MIDI
tokens = tokenizer(midi)  # automatically detects MIDIs and tokens before converting
converted_back_midi = tokenizer(tokens, get_midi_programs(midi))  # PyTorch / TensorFlow / Numpy tensors supported

# Converts MIDI files to tokens saved as JSON files
midi_paths = list(Path('path', 'to', 'dataset').glob('**/*.mid'))
data_augmentation_offsets = [2, 2, 1]  # data augmentation on 2 pitch octaves, 2 velocity values and 1 duration value
tokenizer.tokenize_midi_dataset(midi_paths, Path('path', 'to', 'tokens_noBPE'),
                                data_augment_offsets=data_augmentation_offsets)

# Learns the vocabulary with BPE, from the tokenized files
tokenizer.learn_bpe(
    vocab_size=500,
    tokens_paths=list(Path('path', 'to', 'tokens_noBPE').glob('**/*.json')),
    start_from_empty_voc=False,
)

# Converts the tokenized music into tokens with BPE
tokenizer.apply_bpe_to_dataset(Path('path', 'to', 'tokens_noBPE'), Path('path', 'to', 'tokens_BPE'))
```
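The JSON files written by `tokenize_midi_dataset` can then be loaded for training. A minimal sketch, assuming each file stores its token ids under an `"ids"` key (check the files your MidiTok version produces, as the exact JSON layout has varied across releases):

```python
import json
from pathlib import Path

def load_token_files(tokens_dir, max_seq_len=512):
    """Load token id sequences from a directory of JSON files,
    truncating each sequence to `max_seq_len`."""
    sequences = []
    for path in sorted(Path(tokens_dir).glob('**/*.json')):
        with open(path) as f:
            data = json.load(f)
        ids = data['ids']  # assumed key, may differ per MidiTok version
        # Multi-track tokenizations may store one list of ids per track
        if ids and isinstance(ids[0], list):
            sequences.extend(seq[:max_seq_len] for seq in ids)
        else:
            sequences.append(ids[:max_seq_len])
    return sequences
```

These lists can then be wrapped in your framework's dataset class (e.g. a PyTorch `Dataset`) and padded or batched as needed.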
MidiTok implements the following tokenizations (with links to the original papers):
You can find short presentations in the documentation.
Tokenizations using Bar tokens (REMI, Compound Word and MuMIDI) currently only support 4/* time signatures. This means that each bar is assumed to cover four beats.
Contributions are warmly welcomed: feel free to open an issue or send a PR if you want to add a tokenization or speed up the code. Just make sure your modifications pass the tests, and format your code with Black.
- Time Signature
- Control Change messages
- Option to represent pitch values as pitch intervals, as it seems to improve performance.
- Speeding up MIDI read / load (Rust / C++ binding)
- Data augmentation on duration values at the MIDI level
If you use MidiTok for your research, a citation in your manuscript would be greatly appreciated. ❤️
```bibtex
@inproceedings{miditok2021,
    title={{MidiTok}: A Python package for {MIDI} file tokenization},
    author={Fradet, Nathan and Briot, Jean-Pierre and Chhel, Fabien and El Fallah Seghrouchni, Amal and Gutowski, Nicolas},
    booktitle={Extended Abstracts for the Late-Breaking Demo Session of the 22nd International Society for Music Information Retrieval Conference},
    year={2021},
    url={https://archives.ismir.net/ismir2021/latebreaking/000005.pdf},
}
```
The BibTeX citations of all tokenizations can be found in the documentation.
We acknowledge Aubay, the LIP6, LERIA and ESEO for the financing and support of this project. Special thanks to all the contributors.