Commit
Fix organization name for prometheus model
scottsuk0306 committed May 1, 2024
1 parent 5f807c2 commit 7437527
Showing 3 changed files with 6 additions and 6 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -222,7 +222,7 @@ python eval_article_quality.py --input-path ../FreshWiki/topic_list.csv --gt-dir
#### Use the Metric Yourself
The similarity-based metrics (i.e., ROUGE, entity recall, and heading entity recall) are implemented in [eval/metrics.py](eval/metrics.py).

-For rubric grading, we use the [prometheus-13b-v1.0](https://huggingface.co/kaist-ai/prometheus-13b-v1.0) introduced in [this paper](https://arxiv.org/abs/2310.08491). [eval/evaluation_prometheus.py](eval/evaluation_prometheus.py) provides the entry point of using the metric.
+For rubric grading, we use the [prometheus-13b-v1.0](https://huggingface.co/prometheus-eval/prometheus-13b-v1.0) introduced in [this paper](https://arxiv.org/abs/2310.08491). [eval/evaluation_prometheus.py](eval/evaluation_prometheus.py) provides the entry point of using the metric.

</details>

4 changes: 2 additions & 2 deletions eval/eval_article_quality.py
@@ -176,8 +176,8 @@ def main(args):

parser.add_argument('--tokenizer', default="meta-llama/Llama-2-7b-chat-hf")
parser.add_argument('--model',
-                    choices=["kaist-ai/prometheus-13b-v1.0", "kaist-ai/prometheus-7b-v1.0"],
-                    default="kaist-ai/prometheus-13b-v1.0",
+                    choices=["prometheus-eval/prometheus-13b-v1.0", "prometheus-eval/prometheus-7b-v1.0"],
+                    default="prometheus-eval/prometheus-13b-v1.0",
help="Model to use for rubric evaluation.")
args = parser.parse_args()

6 changes: 3 additions & 3 deletions eval/evaluation_prometheus.py
@@ -171,9 +171,9 @@ def main(args):

parser.add_argument('--tokenizer', default="meta-llama/Llama-2-7b-chat-hf")
parser.add_argument('--model',
-                    choices=["kaist-ai/prometheus-13b-v1.0", "kaist-ai/prometheus-7b-v1.0"],
-                    default="kaist-ai/prometheus-13b-v1.0",
-                    help="Model to use; options are 'kaist-ai/prometheus-13b-v1.0' or 'kaist-ai/prometheus-7b-v1.0'")
+                    choices=["prometheus-eval/prometheus-13b-v1.0", "prometheus-eval/prometheus-7b-v1.0"],
+                    default="prometheus-eval/prometheus-13b-v1.0",
+                    help="Model to use; options are 'prometheus-eval/prometheus-13b-v1.0' or 'prometheus-eval/prometheus-7b-v1.0'")
parser.add_argument('--disable_sample', action='store_true', help='Whether to disable sampling; default is False')
parser.add_argument('--temperature', type=float, default=0.01, help='Temperature for generation; default is 0.01')
parser.add_argument('--top_p', type=float, default=0.95, help='Top P for generation; default is 0.95')
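The effect of this commit is that the rubric-evaluation scripts now only accept model IDs under the new `prometheus-eval` Hugging Face organization. As a minimal sketch, the updated argument definitions can be reproduced standalone (hypothetical excerpt for illustration; in the real scripts these arguments live inside `main` alongside others):

```python
import argparse

def build_parser():
    # Mirrors the post-commit argument definitions from
    # eval/evaluation_prometheus.py, with model IDs under the
    # renamed prometheus-eval organization.
    parser = argparse.ArgumentParser()
    parser.add_argument('--tokenizer', default="meta-llama/Llama-2-7b-chat-hf")
    parser.add_argument('--model',
                        choices=["prometheus-eval/prometheus-13b-v1.0",
                                 "prometheus-eval/prometheus-7b-v1.0"],
                        default="prometheus-eval/prometheus-13b-v1.0",
                        help="Model to use for rubric evaluation.")
    parser.add_argument('--disable_sample', action='store_true',
                        help='Whether to disable sampling; default is False')
    parser.add_argument('--temperature', type=float, default=0.01,
                        help='Temperature for generation; default is 0.01')
    parser.add_argument('--top_p', type=float, default=0.95,
                        help='Top P for generation; default is 0.95')
    return parser

# Parse an empty argument list to inspect the defaults.
args = build_parser().parse_args([])
print(args.model)  # prometheus-eval/prometheus-13b-v1.0
```

Note that `choices` makes argparse reject the old `kaist-ai/...` IDs outright, so stale invocations fail fast with a usage error rather than attempting to download a model from the retired organization name.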
