Evaluation reproducing issues #12
Please note that when I changed the model to google/gemma-2b, the accuracy was computed correctly. I think the problem is specific to a few open-source models. Can you point to any corrections in the codebase that would handle this?
Thank you for reaching out and for your efforts in reproducing the results.
We appreciate your understanding and patience as we work through these challenges.
Thank you for your response.
Thanks for the great work. I'm trying to reproduce the results and am facing the following errors:
Can I use the lm-evaluation-harness script instead of yours to evaluate the results? When I used the lm-harness ammlu dataset, I got 34.1 accuracy compared to your 37. What could cause the difference?
How do I use this script to evaluate another model?
i. When I changed the model to jais-13b, it gave 0% accuracy on Ammlu (all the responses are empty strings).
ii. On any other model, such as Phi-2 or MobiLlama-1B, I get the following error:
Below are the changes I made to config.yaml:
and in ArabicMMLU_few_shots.sh, I changed the model id to Phi-2B-base. Can you please tell me how to fix this?
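For the lm-evaluation-harness cross-check mentioned above, a command along these lines can be used. This is a hedged sketch: the task name (`ammlu`), few-shot count, and batch size are assumptions and should be matched to this repo's evaluation settings, since prompt format and few-shot count alone can explain a gap like 34.1 vs 37.

```shell
# Sketch of an lm-evaluation-harness run on the Arabic MMLU tasks.
# <model_id> is a placeholder; task name and --num_fewshot are assumptions
# that must be aligned with this repo's config to get comparable numbers.
lm_eval --model hf \
    --model_args pretrained=<model_id> \
    --tasks ammlu \
    --num_fewshot 0 \
    --batch_size 8
```

If the harness score still differs, comparing the rendered prompts and the answer-extraction logic between the two scripts is usually the fastest way to find the discrepancy.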
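Regarding the empty responses from jais-13b in item (i) above: a common cause is the post-processing step that strips the prompt from the generated token ids. If the model emits no new tokens (or the prompt length is miscounted), the slice past the prompt decodes to an empty string, which then scores 0%. A minimal sketch of that slicing logic, with a hypothetical function name and made-up token ids:

```python
def strip_prompt(generated_ids, prompt_len):
    """Return only the newly generated token ids.

    If the model echoed just the prompt (no new tokens), the slice
    past prompt_len is empty, and decoding it yields an empty string.
    """
    return generated_ids[prompt_len:]

prompt = [101, 2023, 2003]            # hypothetical prompt token ids
full_output = prompt + [1037, 3231]   # model produced two new tokens
assert strip_prompt(full_output, len(prompt)) == [1037, 3231]

echoed_only = prompt                  # model produced nothing new
assert strip_prompt(echoed_only, len(prompt)) == []
```

So for jais-13b it is worth checking both that the model actually generates new tokens (e.g. `max_new_tokens` is set) and that the prompt length used for slicing matches the tokenized prompt for that model's tokenizer.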