Llama.cpp 30B runs with only 6GB of RAM now:
ggerganov#613
It appears that this was just a misreading of how memory usage was being reported.
Take a look at jart's reply. You still need 20GB of RAM.
https://news.ycombinator.com/item?id=35400066
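The misreading comes from mmap-based model loading: mapped pages only count toward resident memory (RSS) as they are touched, so a freshly started process can look like it uses far less RAM than the model actually needs. A minimal, hypothetical Python sketch of that effect (a small stand-in file, not llama.cpp's actual loader):

```python
import mmap
import os
import tempfile

# Hypothetical illustration: map a file without reading it all into RAM.
# Pages become resident only on access, so memory reporters that show RSS
# can display far less than the full model size right after startup.
size = 16 * 1024 * 1024  # 16 MiB stand-in for a multi-GB model file

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(size)  # sparse file: allocated logically, not physically
    path = f.name

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Touching one byte faults in a single page, not the whole mapping.
    first_byte = mm[0]
    mapped_len = len(mm)
    mm.close()

os.remove(path)
print(first_byte, mapped_len)
```

Once inference actually reads all the weights, the pages are faulted in and real memory pressure appears, which is why the ~20GB requirement still holds.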
Please clarify: is that 20GB of GPU VRAM or system (motherboard) RAM?
@Openaicn Llama.cpp runs on the CPU.