Just an issue to track some nice-to-have TODOs:
make a script that automatically downloads (or shows how to download) the correct FM pretrained weights from HF; also make sure the torch.hub checkpoints don't fill up the home directory
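As a sketch of the cache-directory part: `TORCH_HOME` and `HF_HOME` are the standard environment variables that torch.hub and huggingface_hub honor, so a small helper in the download script could point both at a user-chosen location before any weights are fetched (the helper name and layout below are assumptions, not existing code):

```python
import os


def configure_caches(cache_root: str) -> dict:
    """Point the torch.hub and Hugging Face caches at cache_root instead of
    the home directory. Hypothetical helper: TORCH_HOME and HF_HOME are the
    standard env vars, but the directory layout here is an assumption."""
    paths = {
        "TORCH_HOME": os.path.join(cache_root, "torch"),
        "HF_HOME": os.path.join(cache_root, "huggingface"),
    }
    for var, path in paths.items():
        os.makedirs(path, exist_ok=True)
        # setdefault: respect a value the user has already exported
        os.environ.setdefault(var, path)
    return paths
```

The actual weight download could then go through `huggingface_hub.hf_hub_download`, which respects `HF_HOME` automatically.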
In the config.py file the model path should come from an env variable, because users will have the weights in different locations and shouldn't need to edit the file; alternatively, switch to a URL download to remove hf_upload
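A minimal sketch of what the config.py change could look like — the variable name `FM_MODEL_DIR` and the default location are assumptions for illustration:

```python
import os
from pathlib import Path


def get_model_dir() -> Path:
    """Resolve the FM weights directory from an environment variable so users
    never have to edit config.py for local paths. The env var name
    FM_MODEL_DIR and the fallback default are hypothetical."""
    return Path(os.environ.get("FM_MODEL_DIR", str(Path.home() / ".cache" / "fm_models")))
```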
add the F1 score as a metric, which is more informative than plain accuracy
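In practice this would likely use `torchmetrics.F1Score`; as a self-contained illustration of why macro F1 is more informative than accuracy on imbalanced classes, here is the computation from scratch:

```python
def f1_score(preds, targets, num_classes):
    """Macro-averaged F1: per-class harmonic mean of precision and recall,
    averaged over classes. Unlike accuracy, a rare class that is always
    missed drags the score down. Plain-Python sketch, not project code."""
    scores = []
    for c in range(num_classes):
        tp = sum(p == c and t == c for p, t in zip(preds, targets))
        fp = sum(p == c and t != c for p, t in zip(preds, targets))
        fn = sum(p != c and t == c for p, t in zip(preds, targets))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / num_classes
```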
Up for discussion: instead of one large config.py, the configuration could be split into YAML files that are merged with OmegaConf, for example
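The real implementation would call `OmegaConf.load` on each YAML file and combine them with `OmegaConf.merge`; the stdlib-only sketch below just illustrates the merge semantics involved (later values win, nested sections merge key-wise):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base, the way OmegaConf.merge treats
    nested configs: scalar values from override win, nested dicts are
    merged key by key. Illustration only, not a replacement for OmegaConf."""
    out = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out
```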
Add simple Sphinx documentation that includes some tutorials
Should UperHead have the same configuration across FMs for a fair comparison?
Make additional base classes for the classification and segmentation tasks to remove code duplication and ensure common configuration and metrics across FM encoders
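A rough shape for that hierarchy could be the following — class and attribute names are hypothetical, and the real classes would presumably subclass `pl.LightningModule` and carry the shared training loop and metric setup:

```python
class BaseTask:
    """Shared plumbing (config handling, metrics) so per-FM subclasses only
    have to supply the encoder. Hypothetical sketch, not existing code."""
    metric_names = ("loss",)

    def build_encoder(self):
        # Each FM-specific subclass provides its own encoder here.
        raise NotImplementedError


class ClassificationTask(BaseTask):
    # Common metrics for every classification FM, defined once.
    metric_names = ("loss", "accuracy", "f1")


class SegmentationTask(BaseTask):
    # Common metrics for every segmentation FM, defined once.
    metric_names = ("loss", "miou")
```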
add the option for full finetuning or last-layer (or decoder-only) training; for more specialized cases, expect the user to subclass and override the params_to_optimize function
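A sketch of what that override point could look like — the `mode` values and the `model.encoder` / `model.decoder` attribute names are assumptions, not the project's actual API:

```python
def params_to_optimize(model, mode="decoder_only"):
    """Select which parameters the optimizer sees: 'full' finetunes
    everything, 'decoder_only' freezes the encoder and trains the decoder.
    Anything more specialized is handled by subclassing and overriding this
    function. Hypothetical sketch; attribute names are assumptions."""
    if mode == "full":
        return list(model.parameters())
    if mode == "decoder_only":
        for p in model.encoder.parameters():
            p.requires_grad = False
        return list(model.decoder.parameters())
    raise ValueError(f"unknown mode: {mode}")
```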
setup ruff and add it as a CI linter style test (#3)