🦄️ Yatai: Production-first ML platform on Kubernetes


Yatai is a production-first platform for your machine learning needs. It brings collaborative BentoML workflows to Kubernetes and helps ML teams run model serving at scale, while simplifying model management and deployment across teams.

👉 Pop into our Slack community! We're happy to help with any issue you face or even just to meet you and hear what you're working on :)

Why Yatai?

  • Yatai accelerates the process of taking ML models from training to production and reduces the operational overhead of running a reliable model serving system.

  • Yatai simplifies collaboration between Data Science and Engineering teams. It is designed to leverage the BentoML standard and streamline production ML workflows.

  • Yatai is a cloud native platform with a wide range of integrations to best fit your infrastructure needs, and it is easily customizable for your CI/CD needs.

Core features:

  • Bento Registry - manage all your team's ML models via a simple Web UI and API, and store ML assets on cloud blob storage
  • Deployment Automation - deploy Bentos as auto-scaling API endpoints on Kubernetes and easily roll out new versions
  • Observability - a monitoring dashboard and logging integration that help users identify model performance issues
  • CI/CD - flexible APIs for integrating with your training and CI pipelines (see the sketch below)
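
The build-and-push workflow shown in the Getting Started guide below can also be driven programmatically from a CI pipeline. Here is a minimal sketch using BentoML's Python build API, assuming a quickstart-style project where service.py defines the Service; the file names and dependency list are illustrative:

    import bentoml

    # Build a bento from the current project directory (illustrative values)
    bento = bentoml.bentos.build(
        service="service:svc",                  # import string for the Service object
        include=["*.py"],                       # project files to package
        python={"packages": ["scikit-learn"]},  # pip dependencies of the bento
        build_ctx=".",                          # build context directory
    )
    print(f"built {bento.tag}")

    # The built bento can then be pushed with the CLI configured in step 2 below:
    #   bentoml push iris_classifier:latest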

(Product screenshots: overview page, deployment creation, Bento repositories, model detail, cluster components, deployment details, activities)

Getting Started

1. Install Yatai locally with Minikube
  • Prerequisites: minikube, kubectl, and Helm installed locally
  • Start a minikube Kubernetes cluster: minikube start --cpus 4 --memory 4096
  • Enable the ingress controller: minikube addons enable ingress
  • Install the Yatai Helm chart:
    helm repo add yatai https://bentoml.github.io/yatai-chart
    helm repo update
    helm install yatai yatai/yatai -n yatai-system --create-namespace
  • Verify the installation by waiting for all pods in the yatai-system namespace to become ready: kubectl get pods -n yatai-system
  • Access the Yatai Web UI at http://{Yatai URL}/setup?token={token}. You can retrieve the Yatai URL and the token again with the helm get notes yatai -n yatai-system command.
2. Get an API token and log in with the BentoML CLI
  • Create a new API token in the Yatai Web UI: http://{Yatai URL}/api_tokens
  • Copy the login command shown upon token creation and run it as a shell command, e.g.:
    bentoml yatai login --api-token {YOUR_TOKEN_GOES_HERE} --endpoint http://{Yatai URL}
3. Push a Bento to Yatai
  • Train a sample ML model and build a Bento using code from the BentoML Quickstart Project (a sketch of its service definition appears at the end of this step):
    git clone https://github.com/bentoml/gallery.git && cd ./gallery/quickstart
    pip install -r ./requirements.txt
    python train.py
    bentoml build
  • Push your newly built Bento to Yatai:
    bentoml push iris_classifier:latest
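  • For orientation, the quickstart's service definition looks roughly like the following (a sketch; see the gallery repository for the authoritative service.py). It wraps the trained iris_clf model in a runner and exposes the /classify endpoint used later in this guide:

    import numpy as np
    import bentoml
    from bentoml.io import NumpyNdarray

    # Load the model saved by train.py and wrap it in a runner
    iris_clf_runner = bentoml.sklearn.get("iris_clf:latest").to_runner()

    svc = bentoml.Service("iris_classifier", runners=[iris_clf_runner])

    # Expose POST /classify, accepting and returning NumPy arrays as JSON
    @svc.api(input=NumpyNdarray(), output=NumpyNdarray())
    def classify(input_series: np.ndarray) -> np.ndarray:
        return iris_clf_runner.predict.run(input_series)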
4. Create your first deployment!
  • A Bento Deployment can be created via the Web UI or with a kubectl command:

    • Deploy via the Web UI

      • Go to the deployments page: http://{Yatai URL}/deployments
      • Click the Create button and follow the instructions in the UI
    • Deploy directly via a kubectl command:

      • Define your Bento deployment in a my_deployment.yaml file:
          apiVersion: serving.yatai.ai/v1alpha2
          kind: BentoDeployment
          metadata:
            name: my-bento-deployment
            namespace: my-namespace
          spec:
            bento_tag: iris_classifier:3oevmqfvnkvwvuqj
            ingress:
              enabled: true
            resources:
              limits:
                cpu: "500m"
                memory: "512Mi"
              requests:
                cpu: "250m"
                memory: "128Mi"
            autoscaling:
              max_replicas: 10
              min_replicas: 2
            runners:
            - name: iris_clf
              resources:
                limits:
                  cpu: "1000m"
                  memory: "1Gi"
                requests:
                  cpu: "500m"
                  memory: "512Mi"
              autoscaling:
                max_replicas: 4
                min_replicas: 1
      • Apply the deployment to your minikube cluster:
        kubectl apply -f my_deployment.yaml
  • Monitor the deployment on the Web UI and test the endpoint once the deployment is ready (a Python equivalent of this request appears below):

    curl \
        -X POST \
        -H "content-type: application/json" \
        --data "[[5, 4, 3, 2]]" \
        https://demo-default-yatai-127-0-0-1.apps.yatai.dev/classify
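  • The same request can be sent from Python; a minimal sketch using the requests library, assuming the demo endpoint URL from the curl example above:

    import requests

    # POST one sample of iris measurements to the /classify endpoint
    response = requests.post(
        "https://demo-default-yatai-127-0-0-1.apps.yatai.dev/classify",
        headers={"content-type": "application/json"},
        json=[[5, 4, 3, 2]],  # same payload as the curl example
    )
    response.raise_for_status()
    print(response.json())  # predicted class label(s)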
5. Moving to production
  • See the Administrator's Guide for a comprehensive overview of deploying and configuring Yatai for production use.

Community

Join the BentoML Slack community to ask questions, get help, and share what you're working on.

Contributing

There are many ways to contribute to the project:

  • If you have feedback on the project, share it with the community in GitHub Discussions under the BentoML repo.
  • Report issues you're facing, and give a "thumbs up" to issues and feature requests that are relevant to you.
  • Investigate bugs and review other developers' pull requests.
  • Contribute code or documentation to the project by submitting a GitHub pull request. See the development guide.

License

Elastic License 2.0 (ELv2)
