Could you please provide working code for inference?
I tried adapting the code from eval/model_vqa.py but could not get a working model out of it.
I did not try float16 because I run out of memory on my L4 GPU, and 8-bit does not work either: I get an error at the merging stage.
I also tried merging on CPU, then saving and loading in 8-bit, but the model produces rubbish results.
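For context, the merge step referred to above is just folding the low-rank LoRA delta into the frozen base weights, which is why it needs to happen in full precision (on CPU or GPU) before any 8-bit quantization. A minimal NumPy sketch of that arithmetic, assuming the usual LoRA formulation W' = W + (alpha / r) * B @ A (all shapes and values here are illustrative, not taken from this repo):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in)).astype(np.float32)  # frozen base weight
A = rng.standard_normal((r, d_in)).astype(np.float32)      # LoRA down-projection
B = rng.standard_normal((d_out, r)).astype(np.float32)     # LoRA up-projection

# Merging folds the scaled low-rank update into the base matrix.
W_merged = W + (alpha / r) * B @ A

# The merged weight reproduces base-path + adapter-path outputs exactly,
# which only holds if W is still in float precision when the delta is added.
x = rng.standard_normal(d_in).astype(np.float32)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * B @ (A @ x), atol=1e-5)
```

If the base weights are already quantized to 8-bit when the delta is added, this equality breaks, which is one plausible source of the rubbish outputs described above.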
Also, the paths to the base model and the vision encoder are hardcoded in your config.
It would be very helpful if you could provide your version of the inference code.