
use remote ai model for Tabby itself #3764

Open
itforxp opened this issue Jan 26, 2025 · 7 comments
Labels
enhancement New feature or request

Comments


itforxp commented Jan 26, 2025

I have rented a GPU server and want to point my locally installed Tabby at that server. Is that possible at the moment?
Something like:
export TABBY_BACKEND_LLAMA=http://<REMOTE_MODEL_URL>
export TABBY_BACKEND_AUTHORIZATION="Bearer <YOUR_API_KEY>"
docker run -it -p 8080:8080 -v $HOME/.tabby:/data registry.tabbyml.com/tabbyml/tabby server --remote


Please reply with a 👍 if you want this feature.

itforxp added the enhancement (New feature or request) label on Jan 26, 2025
@wsxiaoys
Member

Hi - https://tabby.tabbyml.com/docs/references/models-http-api/llama.cpp/ contains an example of connecting Tabby to a remote model HTTP server.
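For reference, a minimal sketch of that kind of config in ~/.tabby/config.toml, assuming a llama.cpp server is already serving the model on the rented GPU host (the URL, port, and prompt template below are placeholders to adapt to your model):

[model.completion.http]
kind = "llama.cpp/completion"
# point at the remote llama.cpp server instead of localhost
api_endpoint = "http://<REMOTE_MODEL_URL>:8888"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"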


itforxp commented Jan 27, 2025

Thanks. How do I run Tabby so that it uses the remote model instead of a local one? The current Tabby binary options force me to run a local model.


itforxp commented Jan 28, 2025

Also, I see this in ~/.tabby/config.toml:

[model.completion.http]
kind = "llama.cpp/completion"
api_endpoint = "http://localhost:8888"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>" 

How can I provide a Bearer token to authorize against the remote Ollama side?

@wsxiaoys
Member

You could set the api_key field for authorization. https://tabby.tabbyml.com/docs/references/models-http-api/openai/ contains relevant examples.
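For illustration, the completion section above might look roughly like this with a key added (the token value is a placeholder, and the exact fields each backend accepts are documented at the link above):

[model.completion.http]
kind = "llama.cpp/completion"
api_endpoint = "http://<REMOTE_MODEL_URL>:8888"
# presumably forwarded as an Authorization: Bearer header to the remote server
api_key = "<YOUR_API_KEY>"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"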


itforxp commented Jan 29, 2025

Thanks! But how do I disable Tabby's locally served AI model? I tried to kill the local Tabby model process, but it restarted immediately.

@wsxiaoys
Member

Tabby starts three default local models (completion, chat, and embedding) if they are not configured as remote models. For more information, please refer to our documentation at: https://tabby.tabbyml.com/docs/administration/model/

Could you please confirm if you have set up all three models?
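For illustration, here is a sketch of a ~/.tabby/config.toml with all three models pointed at remote endpoints, so that no local model needs to be downloaded or served. The kind values, endpoints, and model name are assumptions based on the llama.cpp / OpenAI-compatible examples in the docs; adjust them to your actual backend:

# code completion
[model.completion.http]
kind = "llama.cpp/completion"
api_endpoint = "http://<REMOTE_MODEL_URL>:8888"
api_key = "<YOUR_API_KEY>"
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"

# chat / Answer Engine
[model.chat.http]
kind = "openai/chat"
model_name = "<YOUR_CHAT_MODEL>"
api_endpoint = "http://<REMOTE_MODEL_URL>:8888/v1"
api_key = "<YOUR_API_KEY>"

# embeddings used for code indexing
[model.embedding.http]
kind = "llama.cpp/embedding"
api_endpoint = "http://<REMOTE_EMBEDDING_URL>:8889"
api_key = "<YOUR_API_KEY>"

With all three sections configured, tabby serve should no longer need to start a local model.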


itforxp commented Feb 4, 2025

[screenshot attached]
I still have only local models.
