Use a remote AI model for Tabby itself #3764
Comments
Hi - https://tabby.tabbyml.com/docs/references/models-http-api/llama.cpp/ contains an example of connecting Tabby to a remote model HTTP server.
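For context, the configuration on that page looks roughly like this; a minimal sketch, assuming a llama.cpp server reachable at a placeholder URL and a model that uses the CodeLlama-style FIM prompt template:

```toml
# ~/.tabby/config.toml — point code completion at a remote llama.cpp server
# (the endpoint and prompt_template are placeholders; the template must match
# whatever model your llama.cpp instance is actually serving)
[model.completion.http]
kind = "llama.cpp/completion"
api_endpoint = "http://<REMOTE_MODEL_URL>:8080"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"
```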
Thanks. How do I run Tabby so it uses the remote model instead of the local one? The current Tabby binary options force me to run a local model.
Also: how can I provide a Bearer token to authorize on the remote Ollama side?
You could set the `api_key` field in the corresponding `[model.*.http]` section of `config.toml`.
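A minimal sketch of what that could look like, assuming the `api_key` field of Tabby's HTTP model configuration is honored by your chosen `kind`, and that the remote side (or a reverse proxy in front of Ollama) checks the token as an `Authorization: Bearer` header; the model name and endpoint are placeholders:

```toml
# ~/.tabby/config.toml — api_key is sent along with requests to the remote endpoint
[model.chat.http]
kind = "openai/chat"
model_name = "mistral:7b"
api_endpoint = "http://<REMOTE_OLLAMA_URL>:11434/v1"
api_key = "<YOUR_API_KEY>"
```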
Thanks! But one more thing: how do I disable Tabby's locally served AI model? I tried to kill the local model process, but it restarted immediately.
Tabby starts three default local models (completion, chat, and embedding) if they are not configured as remote. For more information, please refer to our documentation: https://tabby.tabbyml.com/docs/administration/model/ Could you please confirm whether you have set up all three models?
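For concreteness, a sketch of a config.toml with all three models pointed at a remote Ollama instance; the kinds, model names, endpoint, and prompt template below are assumptions based on the models-http-api docs and need to match what your server actually serves:

```toml
# ~/.tabby/config.toml — completion, chat, and embedding all remote,
# so Tabby has no need to spawn a local model worker
[model.completion.http]
kind = "ollama/completion"
model_name = "codellama:7b"
api_endpoint = "http://<REMOTE_OLLAMA_URL>:11434"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"

[model.chat.http]
kind = "openai/chat"
model_name = "mistral:7b"
api_endpoint = "http://<REMOTE_OLLAMA_URL>:11434/v1"

[model.embedding.http]
kind = "ollama/embedding"
model_name = "nomic-embed-text"
api_endpoint = "http://<REMOTE_OLLAMA_URL>:11434"
```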
I have rented a GPU server and want to point my locally installed Tabby at that server. Is this possible at the moment?
Something like:

```bash
export TABBY_BACKEND_LLAMA=http://<REMOTE_MODEL_URL>
export TABBY_BACKEND_AUTHORIZATION="Bearer <YOUR_API_KEY>"
docker run -it -p 8080:8080 -v $HOME/.tabby:/data registry.tabbyml.com/tabbyml/tabby server --remote
```
Please reply with a 👍 if you want this feature.
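Note that something close to this already works by mounting a config file rather than passing environment variables; a sketch, assuming the `[model.*.http]` sections shown earlier in this thread are written to `$HOME/.tabby/config.toml` on the host (image and ports follow the command quoted above):

```bash
# With remote models declared in $HOME/.tabby/config.toml, start the server
# without --model/--chat-model so no local model is downloaded or served.
docker run -it -p 8080:8080 \
  -v $HOME/.tabby:/data \
  registry.tabbyml.com/tabbyml/tabby \
  serve
```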