
Can someone provide magnet link to 30B model please? #120

Closed
thedmdim opened this issue Mar 22, 2023 · 5 comments

Comments

@thedmdim

I cannot download from huggingface because the file is too large and my connection speed is too slow.

@erhathaway

On Linux:

wget https://huggingface.co/Pi3141/alpaca-30B-ggml/resolve/main/ggml-model-q4_0.bin -O ggml-model-q4_0.bin
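For a connection that keeps dropping on a ~20 GB file, wget's standard `-c`/`--continue` flag resumes a partial download instead of starting over. A minimal sketch using the same URL as above:

```shell
# Resume an interrupted download rather than restarting from zero.
# -c / --continue appends to the existing partial file if the server
# supports byte ranges (Hugging Face's CDN does).
wget -c https://huggingface.co/Pi3141/alpaca-30B-ggml/resolve/main/ggml-model-q4_0.bin \
     -O ggml-model-q4_0.bin
```

Re-running the same command after a drop picks up where the transfer left off.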

@Castaa

Castaa commented Mar 23, 2023

magnet:?xt=urn:btih:6K5O4J7DCKAMMMAJHWXQU72OYFXPZQJG&dn=ggml-alpaca-30b-q4.bin&xl=20333638921&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce

I hope this magnet link works properly; I've never created one before. This is the alpaca.cpp 30B 4-bit weight file, the same file as the one on huggingface. Let me know if it doesn't work, and apologies if it doesn't. I can confirm it would be far faster to simply download it from huggingface if you have the choice.

Also, I can confirm the file itself works with the current version of alpaca.cpp, which is good news.

If anyone is able to download the entire file, could you seed it for a few days? I fear my upload speed is quite slow; I have jank-tier xfinity.

Update: It's being seeded by multiple users now. So the torrent download speed should be much improved.
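For anyone without a desktop torrent client: aria2 can fetch a magnet link directly from the command line, which is one way to grab (and keep seeding) the file above. A sketch, assuming aria2 is installed:

```shell
# Download the torrent via the magnet URI with aria2.
# --seed-time keeps seeding for 60 minutes after completion, which
# helps the swarm given the poster's limited upload bandwidth.
aria2c --seed-time=60 \
  "magnet:?xt=urn:btih:6K5O4J7DCKAMMMAJHWXQU72OYFXPZQJG&dn=ggml-alpaca-30b-q4.bin&xl=20333638921&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce"
```

The URI is quoted because the `&` separators would otherwise be interpreted by the shell.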

@mallory303

Unfortunately I'm having problems with it.
Does anybody have an idea what the problem is?

llama_model_load: loading model from 'ggml-model-q4_0.bin' - please wait ...
llama_model_load: ggml ctx size = 25631.50 MB
llama_model_load: memory_size = 6240.00 MB, n_mem = 122880
llama_model_load: loading model part 1/4 from 'ggml-model-q4_0.bin'
llama_model_load: llama_model_load: tensor 'tok_embeddings.weight' has wrong size in model file
main: failed to load model from 'ggml-model-q4_0.bin'
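One quick sanity check before digging deeper: the magnet link above declares the exact payload length (`xl=20333638921` bytes), and a truncated download is a common cause of "wrong size in model file" errors. A sketch comparing the local file against that figure (the GNU and BSD `stat` invocations differ, so both are tried):

```shell
# Compare the downloaded file's size against the length declared in the magnet URI.
expected=20333638921
# GNU stat uses -c %s; BSD/macOS stat uses -f %z.
actual=$(stat -c %s ggml-model-q4_0.bin 2>/dev/null || stat -f %z ggml-model-q4_0.bin)
if [ "$actual" -ne "$expected" ]; then
  echo "file is incomplete: $actual of $expected bytes"
fi
```

If the sizes match, the file is whole and the error lies elsewhere (for instance, a stale binary built against an older model format).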

@jellomaster

Make sure you get the latest release and recompile; this should fix things:

git pull
make chat

@mallory303

It works now, thanks :)
