
🐾 Tabby


Self-hosted AI coding assistant. An open-source, on-premises alternative to GitHub Copilot.

Warning: Tabby is still in the alpha phase.

Features

  • Self-contained, with no need for a DBMS or cloud service
  • Web UI for visualizing and configuring models and MLOps.
  • OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
  • Consumer-grade GPU support (FP-16 weight loading with various optimizations).

Demo

Open in Spaces


Getting Started: Server

Docker

We recommend adding the following aliases to your .bashrc or .zshrc file:

# Save aliases to bashrc / zshrc
alias tabby="docker run -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"

# Alias for GPU (requires NVIDIA Container Toolkit)
alias tabby-gpu="docker run --gpus all -u $(id -u) -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby"

After adding these aliases, you can invoke Tabby with the tabby command. Here are some examples of its usage:

# Usage
tabby --help

# Serve the model
tabby serve --model TabbyML/J-350M
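Once the server is running, you can check that it is reachable from your machine. This is a minimal sketch: it assumes the default port mapping from the aliases above and FastAPI's default /docs path for the interactive documentation.

# Check that the server is reachable (assumes FastAPI's default /docs path)
curl -sSf -o /dev/null http://localhost:8080/docs && echo "Tabby server is up"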

Getting Started: Client

We offer multiple ways to connect to the Tabby server, including the OpenAPI interface and editor extensions.

API

Tabby runs a FastAPI server at localhost:8080, which serves OpenAPI documentation for the HTTP API. The same API documentation is also hosted at https://tabbyml.github.io/tabby
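As an illustration, a completion request might look like the following. This is a sketch only: the endpoint path (/v1/completions) and the request fields (language, prompt) are assumptions and may differ between versions, so consult the OpenAPI documentation served by your running instance for the exact schema.

# Request a code completion (endpoint path and fields are assumptions; see the OpenAPI docs)
curl -X POST http://localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"language": "python", "prompt": "def fib(n):"}'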

Editor Extensions
