Self-hosted AI coding assistant. An open-source, on-premises alternative to GitHub Copilot.
> **Warning**: Tabby is still in the alpha phase.
- Self-contained, with no need for a DBMS or cloud service
- Web UI for visualizing and configuring models and MLOps.
- OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
- Consumer-grade GPU support (FP16 weight loading with various optimizations).
We recommend adding the following aliases to your `.bashrc` or `.zshrc` file:
```bash
# Save aliases to .bashrc / .zshrc
alias tabby="docker run -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"

# Alias for GPU (requires NVIDIA Container Toolkit)
alias tabby-gpu="docker run --gpus all -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"
```
After adding these aliases, you can use the `tabby` command as usual. Here are some examples of its usage:
```bash
# Usage
tabby --help

# Serve the model
tabby serve --model TabbyML/J-350M
```
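If you defined the `tabby-gpu` alias above, the same subcommand runs against the GPU-enabled container (assuming the NVIDIA Container Toolkit is installed on the host):

```bash
# Serve the same model on GPU via the tabby-gpu alias defined earlier
tabby-gpu serve --model TabbyML/J-350M
```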
We offer multiple ways to connect to the Tabby server, including the OpenAPI interface and editor extensions.
Tabby serves a FastAPI server at `localhost:8080`, which includes OpenAPI documentation for the HTTP API. The same API documentation is also hosted at https://tabbyml.github.io/tabby.
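As a quick smoke test, you can exercise the completion API with `curl`. The endpoint path and request body below are illustrative assumptions; consult the OpenAPI documentation served by your instance for the exact schema:

```bash
# Hypothetical completion request -- verify the path and JSON fields
# against the OpenAPI docs at http://localhost:8080 before relying on them.
curl -X POST http://localhost:8080/v1/completions \
  -H 'Content-Type: application/json' \
  -d '{"language": "python", "prompt": "def fib(n):"}'
```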
- VSCode Extension – install from the Visual Studio Code Marketplace, or from open-vsx.org
- VIM Extension