Iterate quickly with llama.cpp hot reloading. Use the llama.cpp bindings with bun.sh.


bunny-llama

What is this?

A bunny that sits on top of a llama (and controls it).

To run:

bun clone
bun make
bun ride.ts
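The hot-reloading workflow above relies on Bun re-evaluating your script on every edit (e.g. under `bun --hot`) while `globalThis` survives the reload. A minimal sketch of that pattern, assuming a hypothetical `loadModel` and `model` key that stand in for the expensive llama.cpp model load (these names are not the repo's actual API):

```typescript
// Hypothetical sketch: park expensive state on globalThis so that edits
// under `bun --hot` do not force a model reload. `Model`, `loadModel`,
// and the `model` key are illustrative assumptions.
type Model = { loadedAt: number };

function loadModel(): Model {
  // stands in for the expensive llama.cpp model load
  return { loadedAt: Date.now() };
}

const g = globalThis as { model?: Model };
g.model ??= loadModel(); // runs only on the first evaluation, not on hot reloads

console.log("model loaded at", g.model.loadedAt);
```

On subsequent hot reloads the `??=` assignment is a no-op, so the rest of the script can be edited and re-run against the already-loaded model.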

To clean:

bun clean
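Commands like `bun clone`, `bun make`, and `bun clean` are most likely scripts defined in the repo's package.json, which bun runs directly by name. A hypothetical sketch of what such a scripts section could look like — the actual script bodies in bunny-llama may differ:

```json
{
  "scripts": {
    "clone": "git clone https://github.com/ggerganov/llama.cpp",
    "make": "cd llama.cpp && make",
    "make-cuda": "cd llama.cpp && make LLAMA_CUBLAS=1",
    "clean": "rm -rf llama.cpp"
  }
}
```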

To install dependencies:

bun install

(most likely you already have git and zig)

Install zig at the right version:

bun install -g @oven/zig

or update it as described here

Nvidia llama

For people with Nvidia GPUs:

Install conda.

conda create -n bunny
conda activate bunny
conda install cuda -c nvidia

Then make the llama with CUDA, like so:

bun clone
bun make-cuda
bun ride.ts

Now you have a special CUDA-enabled llama.

If you closed your shell and want to build the CUDA llama again, you need to activate the conda environment first:

conda activate bunny
bun make-cuda
