Warning: Tabby is still in the alpha phase.
An open-source / on-premises alternative to GitHub Copilot.
- Self-contained, with no need for a DBMS or cloud service
- Web UI for visualizing and configuring models and MLOps.
- OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDEs).
- Consumer-grade GPU support (FP16 weight loading with various optimizations).
The easiest way to get started is with the official Docker image:
```bash
docker run \
  -it --rm \
  -v ./data:/data \
  -v ./data/hf_cache:/root/.cache/huggingface \
  -p 5000:5000 \
  -p 8501:8501 \
  -p 8080:8080 \
  -e MODEL_NAME=TabbyML/J-350M \
  tabbyml/tabby
```
You can then query the server using the /v1/completions endpoint:
```bash
curl -X POST http://localhost:5000/v1/completions \
  -H 'Content-Type: application/json' \
  --data '{
    "prompt": "def binarySearch(arr, left, right, x):\n mid = (left +"
  }'
```
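The same request can be made from Python. Below is a minimal sketch using the `requests` library; the request body mirrors the curl example above, and since the response schema may change between alpha releases, it prints the raw JSON payload rather than assuming specific field names:

```python
# complete.py -- a minimal sketch of querying Tabby's /v1/completions endpoint.
# The request body mirrors the curl example above; the response is printed as
# raw JSON because the exact schema may vary between alpha releases (check the
# OpenAPI documentation at localhost:5000 for the current shape).
import requests

resp = requests.post(
    "http://localhost:5000/v1/completions",
    json={"prompt": "def binarySearch(arr, left, right, x):\n mid = (left +"},
)
resp.raise_for_status()
print(resp.json())
```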
To use the GPU backend (Triton) for faster inference, use deployment/docker-compose.yml:
```bash
docker-compose up
```
Note: To use GPUs, you need to install the NVIDIA Container Toolkit. We also recommend using NVIDIA drivers with CUDA version 11.8 or higher.
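If the GPU stack fails to start, a quick way to check whether CUDA is visible from inside a container is a short PyTorch snippet. This is a sketch assuming PyTorch is installed in the image you run it in:

```python
# cuda_check.py -- sanity check that a CUDA GPU is visible from the container.
# Assumes PyTorch is installed in the environment; adjust for your image.
import torch

if torch.cuda.is_available():
    print(f"CUDA {torch.version.cuda}, {torch.cuda.device_count()} device(s) found:")
    for i in range(torch.cuda.device_count()):
        print(f"  [{i}] {torch.cuda.get_device_name(i)}")
else:
    print("No CUDA device visible; check the NVIDIA Container Toolkit setup.")
```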
We also provide an interactive playground in the admin panel at localhost:8501.
For deployment on SkyPilot, see deployment/skypilot/README.md.
Tabby opens a FastAPI server at localhost:5000, which embeds OpenAPI documentation for the HTTP API.
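Since FastAPI serves its schema at /openapi.json by default, you can enumerate the available routes programmatically. A small sketch, assuming Tabby keeps that default route:

```python
# list_routes.py -- fetch Tabby's OpenAPI schema and list the HTTP routes.
# Assumes the FastAPI default /openapi.json route is enabled.
import requests

schema = requests.get("http://localhost:5000/openapi.json").json()
for path, methods in schema["paths"].items():
    for method in methods:
        print(f"{method.upper():7} {path}")
```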
Go to the development directory and run:
```bash
make dev
```
or
```bash
make dev-python  # Turns off the Triton backend (for development environments without CUDA)
```
- Fine-tuning models on private code repositories. #23
- Production readiness (OpenTelemetry, Prometheus metrics).