Starwhale is an MLOps platform. Its core concepts are Instance, Project, Runtime, Model, and Dataset.
- Instance: Each installation of Starwhale is called an instance.
  - 💻 Standalone Instance: The simplest form, which requires only the Starwhale Client (`swcli`). `swcli` is written in pure Python 3.
  - 🏢 On-Premises Instance: The cloud form that we call a private cloud instance. Both Kubernetes and bare metal meet the basic environmental requirements.
  - ☁️ Cloud Hosted Instance: The cloud form that we call a public cloud instance. The Starwhale team maintains the web service.

  Starwhale tries to keep concepts consistent across the different instance types, so people can easily exchange data and migrate between them.
- Project: The basic unit for organizing different resources.
- ML Basic Elements: The Machine Learning/Deep Learning running environments and artifacts. Starwhale empowers these essential ML/DL elements with packaging, versioning, reproducibility, and shareability.
  - 🛠️ Runtime: A description of the software dependencies needed to "run" a model, including Python libraries, native libraries, native binaries, etc.
  - 📦 Model: The standard model format used in model delivery.
  - 🍫 Dataset: A unified description of how data and labels are stored and organized. Starwhale datasets can be loaded efficiently.
- Running Fundamentals: Starwhale uses Job, Step, and Task to execute ML/DL actions such as model training, evaluation, and serving. Starwhale's Controller-Agent structure scales out easily.
  - 🥕 Job: A set of programs that do specific work. Each job consists of one or more steps.
  - 🌵 Step: A distinct stage of the work. Each step consists of one or more tasks.
  - 🥑 Task: The operational entity. Tasks belong to specific steps.
- Scenarios: Starwhale provides best practices and out-of-the-box solutions for different ML/DL scenarios.
  - 🚀 Model Training (TBD): Use the Starwhale Python SDK to record experiment metadata, metrics, logs, and artifacts.
  - 🖥️ Model Evaluation: `PipelineHandler` and a few report decorators give you complete, helpful, and user-friendly evaluation reports with only a few lines of code.
  - 💫 Model Serving (TBD): A Starwhale Model can be deployed as a web service or streaming service in production, with deployment capability, observability, and scalability. Data scientists do not need to write code unrelated to ML/DL.
- You can try the following MNIST quick tour in Google Colab.
- Or run example/mnist/notebook.ipynb locally using VS Code or JupyterLab.
- 🍰 STEP1: Installing Starwhale

  ```bash
  python3 -m pip install --pre starwhale
  ```
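  If you want to confirm the client installed cleanly, here is a quick stdlib-only check (it simply reads pip's package metadata; `swcli --help` works as well):

  ```python
  from importlib.metadata import version

  # Print the installed starwhale package version from pip metadata.
  print(version("starwhale"))
  ```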
- 🍵 STEP2: Downloading the MNIST example

  ```bash
  git clone https://github.com/star-whale/starwhale.git
  ```
- ☕ STEP3: Building a runtime

  ```bash
  cd example/runtime/pytorch
  swcli runtime build .
  swcli runtime list
  swcli runtime info pytorch/version/latest
  ```
- 🍞 STEP4: Building a model

  - Enter the `example/mnist` directory:

    ```bash
    cd ../../mnist
    ```
  - Write some code with the Starwhale Python SDK. The complete code is here.

    ```python
    import typing as t

    import torch
    from starwhale import Image, PipelineHandler, PPLResultIterator, multi_classification


    class MNISTInference(PipelineHandler):
        def __init__(self) -> None:
            super().__init__()
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
            self.model = self._load_model(self.device)

        def ppl(self, img: Image, **kw: t.Any) -> t.Tuple[t.List[int], t.List[float]]:
            data_tensor = self._pre(img)
            output = self.model(data_tensor)
            return self._post(output)

        @multi_classification(
            confusion_matrix_normalize="all",
            show_hamming_loss=True,
            show_cohen_kappa_score=True,
            show_roc_auc=True,
            all_labels=[i for i in range(0, 10)],
        )
        def cmp(
            self, ppl_result: PPLResultIterator
        ) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]:
            result, label, pr = [], [], []
            for _data in ppl_result:
                label.append(_data["annotations"]["label"])
                result.extend(_data["result"][0])
                pr.extend(_data["result"][1])
            return label, result, pr

        def _pre(self, input: Image):
            """write some mnist preprocessing code"""

        def _post(self, input: t.Any):
            """write some mnist post-processing code"""

        def _load_model(self, device):
            """load your pre-trained model"""
    ```
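    The three underscore-prefixed hooks above are stubs. A minimal sketch of what they might look like when pasted into `MNISTInference` is below; the normalization constants, the tensor shapes, the `img.to_bytes()` payload accessor, and the TorchScript checkpoint at `models/mnist_cnn.pt` are illustrative assumptions, not the official implementation:

    ```python
    import typing as t

    import torch

    def _pre(self, img) -> torch.Tensor:
        # Decode the raw 28x28 grayscale payload into a normalized
        # (1, 1, 28, 28) float tensor; to_bytes() is assumed to return raw pixels.
        raw = bytearray(img.to_bytes())
        tensor = torch.frombuffer(raw, dtype=torch.uint8).reshape(1, 1, 28, 28)
        return ((tensor.float() / 255.0) - 0.1307) / 0.3081  # common MNIST stats

    def _post(self, output: torch.Tensor) -> t.Tuple[t.List[int], t.List[t.List[float]]]:
        # Convert logits into the (predictions, probabilities) pair that cmp()
        # consumes: one predicted label and one 10-way probability list per sample.
        probs = torch.nn.functional.softmax(output, dim=1)
        return torch.argmax(probs, dim=1).tolist(), probs.tolist()

    def _load_model(self, device: torch.device) -> torch.nn.Module:
        # Load the checkpoint referenced by model.yaml and switch to eval mode.
        model = torch.jit.load("models/mnist_cnn.pt", map_location=device)
        return model.eval()
    ```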
  - Define `model.yaml`:

    ```yaml
    name: mnist
    model:
      - models/mnist_cnn.pt
    config:
      - config/hyperparam.json
    run:
      handler: mnist.evaluator:MNISTInference
    ```
  - Run one command to build the model:

    ```bash
    swcli model build .
    swcli model info mnist/version/latest
    ```
- 🍺 STEP5: Building a dataset

  - Download the MNIST raw data files:

    ```bash
    mkdir -p data && cd data
    wget http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
    wget http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
    gzip -d *.gz
    cd ..
    ls -lah data/*
    ```
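    To verify the downloads before building anything, the IDX header can be checked with nothing but the standard library (2051 is the magic number of IDX image files):

    ```python
    import struct
    from pathlib import Path

    # Read the 16-byte IDX header of the test images file and sanity-check it.
    with (Path("data") / "t10k-images-idx3-ubyte").open("rb") as f:
        magic, number, height, width = struct.unpack(">IIII", f.read(16))

    print(magic, number, height, width)  # expected: 2051 10000 28 28
    ```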
  - Write some code with the Starwhale Python SDK. The full code is here.

    ```python
    import struct
    import typing as t
    from pathlib import Path

    from starwhale import GrayscaleImage, SWDSBinBuildExecutor


    class DatasetProcessExecutor(SWDSBinBuildExecutor):
        def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
            root_dir = Path(__file__).parent.parent / "data"

            with (root_dir / "t10k-images-idx3-ubyte").open("rb") as data_file, (
                root_dir / "t10k-labels-idx1-ubyte"
            ).open("rb") as label_file:
                _, data_number, height, width = struct.unpack(">IIII", data_file.read(16))
                _, label_number = struct.unpack(">II", label_file.read(8))
                print(
                    f">data({data_file.name}) split data:{data_number}, label:{label_number} group"
                )
                image_size = height * width

                for i in range(0, min(data_number, label_number)):
                    _data = data_file.read(image_size)
                    _label = struct.unpack(">B", label_file.read(1))[0]
                    yield GrayscaleImage(
                        _data,
                        display_name=f"{i}",
                        shape=(height, width, 1),
                    ), {"label": _label}
    ```
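    The generator logic above can be smoke-tested without Starwhale at all; here is a stdlib-only sketch that walks the same two files in lockstep and prints the first few label/image pairs:

    ```python
    import struct
    from pathlib import Path

    root = Path("data")
    with (root / "t10k-images-idx3-ubyte").open("rb") as data_file, (
        root / "t10k-labels-idx1-ubyte"
    ).open("rb") as label_file:
        _, number, height, width = struct.unpack(">IIII", data_file.read(16))
        label_file.read(8)  # skip the label file header
        for i in range(3):
            img = data_file.read(height * width)  # one 28x28 grayscale image
            (label,) = struct.unpack(">B", label_file.read(1))
            print(f"sample {i}: label={label}, image_bytes={len(img)}")
    ```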
  - Define `dataset.yaml`:

    ```yaml
    name: mnist
    handler: mnist.dataset:DatasetProcessExecutor
    attr:
      alignment_size: 1k
      volume_size: 4M
      data_mime_type: "x/grayscale"
    ```
  - Run one command to build the dataset:

    ```bash
    swcli dataset build .
    swcli dataset info mnist/version/latest
    ```
- 🍖 STEP6: Running an evaluation job

  ```bash
  swcli -vvv eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest
  swcli eval list
  swcli eval info ${version}
  ```
👏 Now you have completed the fundamental steps for Starwhale Standalone.

Let's go ahead and finish the tutorial on an on-premises instance.
- 🍰 STEP1: Install minikube and helm
- 🍵 STEP2: Start minikube

  ```bash
  minikube start
  ```

  For users in mainland China, please add the startup parameters `--image-mirror-country=cn --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers`. If there is no kubectl binary on your machine, you can use `minikube kubectl`, or add the alias `alias kubectl="minikube kubectl --"`.
- 🍵 STEP3: Installing Starwhale

  ```bash
  helm repo add starwhale https://star-whale.github.io/charts
  helm repo update
  helm install --devel my-starwhale starwhale/starwhale -n starwhale --create-namespace --set minikube.enabled=true
  ```
  After the installation succeeds, the following prompt message appears:

  ```bash
  NAME: my-starwhale
  LAST DEPLOYED: Thu Jun 23 14:48:02 2022
  NAMESPACE: starwhale
  STATUS: deployed
  REVISION: 1
  NOTES:
  ******************************************
  Chart Name: starwhale
  Chart Version: 0.3.0
  App Version: 0.3.0
  ...

  Port Forward Visit:
    - starwhale controller:
      - run: kubectl port-forward --namespace starwhale svc/my-starwhale-controller 8082:8082
      - visit: http://localhost:8082
    - minio admin:
      - run: kubectl port-forward --namespace starwhale svc/my-starwhale-minio 9001:9001
      - visit: http://localhost:9001
    - mysql:
      - run: kubectl port-forward --namespace starwhale svc/my-starwhale-mysql 3306:3306
      - visit: mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale

  ******************************************
  Login Info:
  - starwhale: u:starwhale, p:abcd1234
  - minio admin: u:minioadmin, p:minioadmin

  *_* Enjoy using Starwhale. *_*
  ```
  Then keep checking the minikube service status until all pods are running:

  ```bash
  kubectl get pods -n starwhale
  ```

  ```bash
  NAME                                       READY   STATUS    RESTARTS   AGE
  my-starwhale-controller-7d864558bc-vxvb8   1/1     Running   0          1m
  my-starwhale-minio-7d45db75f6-7wq9b        1/1     Running   0          2m
  my-starwhale-mysql-0                       1/1     Running   0          2m
  ```

  Make the Starwhale controller accessible locally with the following command:

  ```bash
  kubectl port-forward --namespace starwhale svc/my-starwhale-controller 8082:8082
  ```
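  With the port-forward running in another terminal, here is a minimal reachability check (it only asserts that the controller answers HTTP on the forwarded port):

  ```python
  import urllib.request

  # Expect an HTTP 200 status from the controller web UI via the port-forward.
  with urllib.request.urlopen("http://localhost:8082", timeout=5) as resp:
      print(resp.status)
  ```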
- ☕ STEP4: Upload the artifacts to the cloud instance

  Pre-prepared artifacts: before starting this part of the tutorial, the following three artifacts should already exist on your machine:

  - a Starwhale model named mnist
  - a Starwhale dataset named mnist
  - a Starwhale runtime named pytorch

  These are the three artifacts we built on our machine with Starwhale in the previous steps.
  Use swcli to operate the remote server. First, log in to the server:

  ```bash
  swcli instance login --username starwhale --password abcd1234 --alias dev http://localhost:8082
  ```
  Then copy the model, dataset, and runtime that we built earlier:

  ```bash
  swcli model copy mnist/version/latest dev/project/starwhale
  swcli dataset copy mnist/version/latest dev/project/starwhale
  swcli runtime copy pytorch/version/latest dev/project/starwhale
  ```
- 🍞 STEP5: Use the web UI to run an evaluation

  - Log in to the Starwhale instance: use the username (starwhale) and password (abcd1234) to open the server web UI (http://localhost:8082/).
  - Then you will see the project named 'starwhale' that the artifacts were copied into. Click the project name, and you will see the model, runtime, and dataset uploaded in the previous step.
  - Create and view an evaluation job.
Congratulations! You have completed the evaluation process for a model.
- Visit the Starwhale homepage.
- More information is available in the official documentation.
- For general questions and support, join the Slack.
- For bug reports and feature requests, please use GitHub Issues.
- To get community updates, follow @starwhaleai on Twitter.
- For Starwhale artifacts, please visit:
  - the Python package on PyPI.
  - the Helm charts on Artifact Hub.
  - the Docker images on Docker Hub and ghcr.io.
- Additionally, you can always find us at [email protected].
🌼👏 PRs are always welcome 👍🍺. See Contribution to Starwhale for more details.
Starwhale is licensed under the Apache License 2.0.