Forked from abetlen/llama-cpp-python
Showing 1 changed file with 1 addition and 1 deletion.
Submodule llama.cpp updated 18 files:
| Changes | File |
| --- | --- |
| +2 −0 | .github/workflows/build.yml |
| +8 −9 | .github/workflows/server.yml |
| +2 −0 | .gitignore |
| +38 −32 | CMakeLists.txt |
| +8 −7 | Makefile |
| +13 −0 | common/common.cpp |
| +1 −0 | common/common.h |
| +26 −8 | examples/embedding/embedding.cpp |
| +6 −20 | examples/gritlm/gritlm.cpp |
| +13 −5 | examples/llava/README.md |
| +8 −1 | examples/server/tests/features/steps/steps.py |
| +33 −16 | ggml-metal.m |
| +0 −3 | ggml-metal.metal |
| +32 −32 | ggml.h |
| +6 −0 | gguf-py/gguf/constants.py |
| +9 −0 | gguf-py/gguf/gguf_reader.py |
| +12 −4 | gguf-py/gguf/gguf_writer.py |
| +20 −12 | llama.cpp |