request for colab inference notebook #2
Comments
Hello @GeorvityLabs
If you could give some instructions, I can try to make one.
Maybe once the model checkpoint is released.
I'm wondering if you have read the README.md (https://github.com/sony/hFT-Transformer/blob/master/README.md)?
There are a lot of steps in preprocessing audio files that are solely for evaluation (e.g. conv_note2label.py), not for taking a new audio file and generating MIDI. It would be great if you could include a script we could run starting from a new dataset. Or, if it is as simple as excluding one line of EXE-CORPUS-MAESTRO.sh, letting us know would be great too. It is really hard to read through such a large code base and immediately understand what is going on, and where.
@GeorvityLabs Have you made a sample that loads a .wav or .mp3 piano recording and produces the MIDI output? Many thanks!
@KeisukeToyama Great work!
I hope you can create a Google Colab inference notebook where we could use the Colab T4 GPU.
You could give the user the option to load a .wav or .mp3 piano recording from their computer and then get the MIDI output using hFT-Transformer.
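For the "get the MIDI output" half of that workflow, one common post-processing step in automatic music transcription is turning the model's frame-level activations into note events. This is a generic sketch of that step, not hFT-Transformer's actual code: `pianoroll_to_notes` is a hypothetical helper, and it assumes the model's output has already been thresholded into a binary frames × 128 piano roll at a known frame rate.

```python
import numpy as np

def pianoroll_to_notes(roll, fps=100):
    """Convert a binary piano roll (frames x 128 MIDI pitches) into
    (pitch, onset_sec, offset_sec) note events, ordered by onset time."""
    notes = []
    n_frames, n_pitches = roll.shape
    for pitch in range(n_pitches):
        active = False
        onset = 0
        for t in range(n_frames):
            if roll[t, pitch] and not active:
                active, onset = True, t          # note starts here
            elif not roll[t, pitch] and active:
                notes.append((pitch, onset / fps, t / fps))  # note ends
                active = False
        if active:  # note still sounding at the end of the roll
            notes.append((pitch, onset / fps, n_frames / fps))
    return sorted(notes, key=lambda n: n[1])

# Example: two overlapping notes at 100 frames per second
roll = np.zeros((10, 128), dtype=bool)
roll[2:5, 60] = True   # middle C, frames 2-4
roll[0:3, 64] = True   # E above, frames 0-2
print(pianoroll_to_notes(roll))
# -> [(64, 0.0, 0.03), (60, 0.02, 0.05)]
```

The resulting (pitch, onset, offset) tuples can then be written out with any MIDI library (e.g. pretty_midi or mido). Onset/offset smoothing and velocity estimation are omitted here for brevity.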