
improve GGUF metadata handling #6082

Closed

ddh0 wants to merge 3 commits

Conversation

@ddh0 commented Jun 2, 2024

  • Add support for GGUFv2 (previously only GGUFv3 was supported)
  • Support execution of big-endian GGUF files on big-endian hosts
  • Fail if the magic bytes b'GGUF' are wrong
  • Continue to support arrays of metadata (such as tokenizer.ggml.tokens) for both v2 and v3

Verified working with multiple GGUFv2 and v3 models.
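
For context, a GGUF file starts with the magic bytes b'GGUF', followed by a uint32 version and (in v2 and v3) two uint64 counts, all in the file's native byte order. Below is a minimal sketch of how the magic, version, and endianness checks can fit together; the function name and the low-16-bits heuristic for detecting a byte-swapped version field are this sketch's own assumptions, not the PR's actual code:

```python
import struct

def read_gguf_header(f):
    # Fail fast on the magic bytes instead of misparsing an arbitrary file
    magic = f.read(4)
    if magic != b'GGUF':
        raise ValueError(f"bad magic bytes {magic!r}, expected b'GGUF'")

    # The uint32 version is stored in the file's native byte order. Read it
    # little-endian first: a small version number that arrived byte-swapped
    # (a big-endian file) shows up with its low 16 bits all zero, which no
    # real version number does.
    (version,) = struct.unpack('<I', f.read(4))
    if version & 0xFFFF == 0:
        fmt = '>'  # big-endian file
        version = struct.unpack('>I', struct.pack('<I', version))[0]
    else:
        fmt = '<'  # little-endian file

    if version not in (2, 3):
        raise ValueError(f'unsupported GGUF version {version}')

    # v2 and v3 both store the tensor count and the metadata key/value
    # count as uint64s immediately after the version field
    tensor_count, metadata_kv_count = struct.unpack(fmt + 'QQ', f.read(16))
    return version, fmt, tensor_count, metadata_kv_count
```

The returned `fmt` prefix can then be reused when unpacking the metadata key/value pairs and arrays (such as tokenizer.ggml.tokens) that follow the header.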

@oobabooga I hope this contribution is helpful :)

@oobabooga (Owner)

I prefer to not increase the complexity of the code base to handle upstream breaking changes in llama.cpp. If GGUFv3 is the current standard, let's stick with GGUFv3.

Discarding those backward compatibility changes, are there any other modifications that you still find worth adding in this PR?

@ddh0 (Author) commented Jun 27, 2024

> Discarding those backward compatibility changes, are there any other modifications that you still find worth adding in this PR?

It allows execution of big-endian GGUF files on big-endian hosts, instead of only allowing files to be executed on little-endian machines.

EDIT: And it also fails if the magic bytes b'GGUF' are wrong.
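
To illustrate why the byte-order handling matters, here is a standalone example (not code from the PR) showing how a count written by a big-endian host is silently misread by a little-endian parse:

```python
import struct

# A uint64 tensor count of 32000, as a big-endian host would write it
be_bytes = struct.pack('>Q', 32000)

(wrong,) = struct.unpack('<Q', be_bytes)  # 35184372088832000 (garbage)
(right,) = struct.unpack('>Q', be_bytes)  # 32000
```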

@oobabooga (Owner)

Could you keep only those changes?

@ddh0 (Author) commented Jun 27, 2024

> Could you keep only those changes?

Done 👍

@oobabooga (Owner)

@ddh0 sorry for not having reviewed this earlier -- I found that even after removing the GGUFv2 support code, the changes were still nontrivial.

My take is that, while technically correct, these changes concern edge cases that are unlikely to happen in practice:

  • Big-endian systems running ML models are essentially non-existent today
  • The magic bytes check protects against a very unlikely error scenario

I want to keep the GGUF reader simple and focused on common cases, so more people can understand it.

Don't take that as a dismissal - I admire your knowledge of low level programming and am excited about your work on easy-llama!

@oobabooga closed this Jan 9, 2025
@ddh0 (Author) commented Jan 9, 2025

Hey no worries, that's totally fair. Thanks for taking the time anyway 👍
