Basically, I am using nsfwjs with the graph model type:

nsfwjs.load('/path/to/different/model/', { type: 'graph' })

Loading this way takes up to 150 MB per worker in my Node.js application, which is a lot. I just want to know how I can load the model with the size: 299 option instead; I am hoping that this will significantly reduce the memory usage.

I do understand that the size option won't work directly with the https://github.com/GantMan/nsfw_model/releases/tag/1.1.0 models and that I might need to convert them first, so my question is: how can I convert these models from graph format to the image (299x299) format?
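For comparison, the two load variants being discussed might look like the sketch below. This assumes nsfwjs's documented load(path, options) signature and a local model served via a file:// URL with @tensorflow/tfjs-node; the paths are placeholders, not real locations:

```javascript
const tf = require('@tensorflow/tfjs-node'); // registers the Node backend
const nsfwjs = require('nsfwjs');

async function main() {
  // Current setup: a converted graph model, loaded with the graph type.
  const graphModel = await nsfwjs.load('file://path/to/model/', { type: 'graph' });

  // Hoped-for setup: a layers model loaded with an explicit input size,
  // which is what the size: 299 option is for.
  const layersModel = await nsfwjs.load('file://path/to/other/model/', { size: 299 });
}

main();
```

Whether the size option actually lowers per-worker memory would depend on the model files themselves, which is the crux of the question above.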
Hey,

Thank you for all the efforts on this library.

I downloaded the models from https://github.com/GantMan/nsfw_model/releases/tag/1.1.0 and was able to use them with nsfwjs via the graph option. I just wanted to know how I can convert these models from graph format to the image (299x299) format. I know I can just download them from your S3 bucket, but I don't want to increase your network costs, since I will be deploying the same models to a cluster of servers, so I was looking for an optimal solution.

Also, I just noticed that you've deployed a new model (mobilenetMid) with similar accuracy but a smaller size. Any tips on how to compile the new model for backend (Node.js) use?
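For context, the usual route for producing a TensorFlow.js model from a Keras release is the tensorflowjs_converter CLI from the tensorflowjs Python package. A sketch of the conversion step being asked about, assuming the release ships a Keras .h5 file (the filenames here are placeholders):

```shell
# Install the converter (Python package)
pip install tensorflowjs

# Convert a Keras .h5 model into a tfjs graph-model directory
tensorflowjs_converter \
  --input_format=keras \
  --output_format=tfjs_graph_model \
  mobilenet_mid.h5 \
  ./web_model
```

The resulting directory (model.json plus weight shards) is what nsfwjs.load() would then point at from the Node.js backend.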