Popular repositories

- localGPT (Python, forked from PromtEngineer/localGPT)
  Chat with your documents on your local device using GPT models. No data leaves your device; it is 100% private.
- llama2.c (Python, forked from karpathy/llama2.c)
  Inference Llama 2 in one file of pure C.
- live (HTML, forked from fanmingming/live)
  ✯ A live-stream source sharing project directly reachable from mainland China ✯ 🔕 Permanently free, direct access, fully open source, ad-free, with complete channel logos; sources support IPv4/IPv6 dual-stack access 🔕
- llama2-webui (Python, forked from liltom-eth/llama2-webui)
  Run Llama 2 locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Supports Llama-2-7B/13B/70B in 8-bit and 4-bit, with GPU inference (6 GB VRAM) and CPU inference.
- Llama-2-Open-Source-LLM-CPU-Inference (Python, forked from kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference)
  Run Llama 2 and other open-source LLMs locally on CPU for document Q&A.