
request for colab inference notebook #2

Open
GeorvityLabs opened this issue Jul 30, 2023 · 6 comments

Comments

@GeorvityLabs

@KeisukeToyama great work!
I hope you can create a Google Colab inference notebook where we could use the Colab T4 GPU.
You could give the user the option to load a .wav or .mp3 piano recording from their computer and then get the MIDI output using hFT-Transformer.

@KeisukeToyama
Collaborator

Hello @GeorvityLabs
Thank you very much for the request.
I wish I could, but I'm afraid it is difficult to prepare a Google Colab notebook because I am busy with work.
I appreciate your understanding.

@GeorvityLabs
Author

> Hello @GeorvityLabs
> Thank you very much for the request.
> I wish I could, but I'm afraid it is difficult to prepare a Google Colab notebook because I am busy with work.
> I appreciate your understanding.

If you could give some instructions, I can try to make one.

@GeorvityLabs
Copy link
Author

> If you could give some instructions, I can try to make one.

Maybe once the model checkpoint is released.

@KeisukeToyama
Collaborator

KeisukeToyama commented Aug 7, 2023

I'm wondering if you have read the README.md (https://github.com/sony/hFT-Transformer/blob/master/README.md).
The evaluation step explains how to transcribe audio. We've already shared the checkpoint, so I think you can try it yourself.

@alex2awesome

alex2awesome commented Nov 14, 2023

There are a lot of steps in preprocessing audio files that are solely for evaluation (e.g. conv_note2label.py), not for taking a new audio file and generating MIDI.

It would be great if you could include a script that we could run starting from a new dataset.

Or, if it is as simple as excluding one line of EXE-CORPUS-MAESTRO.sh, it would be great if you could let us know. It is hard to read through such a large code base and immediately understand what is going on, and where.

@faltings

faltings commented May 6, 2024

@GeorvityLabs Have you made a sample that loads a .wav or .mp3 piano recording and gets the MIDI output? Many thanks!
