Remove gputil dependency and minor cleanup (mlc-ai#15)
ehsanmok authored Mar 30, 2023
1 parent 2cbd64d commit ab1b1c0
Showing 3 changed files with 5 additions and 5 deletions.
3 changes: 3 additions & 0 deletions README.md
@@ -11,6 +11,7 @@ This project takes a step to change that status quo and bring more diversity to
Building special client apps for those applications is one option (which we also support), but wouldn't it be even more amazing if we could simply open a browser and bring AI natively to your browser tab? There is some level of readiness in the ecosystem. WebAssembly allows us to port lower-level runtimes onto the web. To solve the compute problem, WebGPU has been maturing lately and enables native GPU execution in the browser.

We are just seeing the necessary elements coming together on the client side, both in terms of hardware and the browser ecosystem. Still, there are big hurdles to cross; to name a few:
+
* We need to bring the models somewhere without the relevant GPU-accelerated Python frameworks.
* Most of the AI frameworks rely heavily on optimized compute libraries that are maintained by hardware vendors. We need to start from zero. To get the maximum benefit, we might also need to produce variants per client environment.
* Careful planning of memory usage so we can fit the models into memory.
@@ -20,6 +21,7 @@ We do not want to only do it for just one model. Instead, we would like to prese
## Get Started

We have a [Jupyter notebook](https://github.com/mlc-ai/web-stable-diffusion/blob/main/walkthrough.ipynb) that walks you through all the stages, including
+
* elaborate on the key points of web ML model deployment and how we meet them,
* import the stable diffusion model,
* optimize the model,
@@ -28,6 +30,7 @@ We have a [Jupyter notebook](https://github.com/mlc-ai/web-stable-diffusion/blob
* deploy the model on the web with the WebGPU runtime.

If you want to go through these steps on the command line, please follow the commands below:
+
<details><summary>Commands</summary>

* Install TVM Unity. You can either
1 change: 0 additions & 1 deletion requirements.txt
@@ -1,4 +1,3 @@
accelerate
diffusers
-gputil
transformers
6 changes: 2 additions & 4 deletions walkthrough.ipynb
@@ -34,8 +34,7 @@
"!python3 -m pip install --pre torch --upgrade --index-url https://download.pytorch.org/whl/nightly/cpu\n",
"!python3 -m pip install -r requirements.txt\n",
"\n",
"import GPUtil\n",
"has_gpu = len(GPUtil.getGPUs()) > 0\n",
"has_gpu = !nvidia-smi -L\n",
"cudav = \"-cu116\" if has_gpu else \"\" # check https://mlc.ai/wheels if you have a different CUDA version\n",
"\n",
"!python3 -m pip install mlc-ai-nightly{cudav} -f https://mlc.ai/wheels"
@@ -54,7 +53,6 @@
"metadata": {},
"outputs": [],
"source": [
"from typing import Dict, List, Tuple\n",
"from platform import system\n",
"\n",
"import tvm\n",
@@ -1882,7 +1880,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.8.16"
}
},
"nbformat": 4,
