MedHelm: Add VQA-RAD scenario and specs #3246

Open · wants to merge 14 commits into base: med-helm
improve(vlm): add max tokens to short answer
Leonardo Schettini committed Dec 24, 2024
commit aabe0feb86b7ecb5c8fa4e9fce83d2c056e79669
src/helm/benchmark/run_specs/vlm_run_specs.py (2 additions, 1 deletion)
@@ -55,14 +55,15 @@ def _get_generation_adapter_spec(
 
 def _get_short_answer_generation_adapter_spec(
     instructions: Optional[str] = None,
+    max_tokens: Optional[int] = None,
 ) -> AdapterSpec:
     return _get_generation_adapter_spec(
         instructions=(
             "Just give a short answer without answering in a complete sentence."
             if instructions is None
             else instructions
         ),
-        max_tokens=20,
+        max_tokens=20 if max_tokens is None else max_tokens,
     )
 
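In effect, the commit turns the hard-coded 20-token cap into an overridable default while leaving existing callers untouched. A minimal usage sketch, assuming `_get_generation_adapter_spec` forwards `max_tokens` into the returned `AdapterSpec` (the 50-token override is an illustrative value, not part of this diff):

    # Default behavior is unchanged: short answers stay capped at 20 tokens.
    default_spec = _get_short_answer_generation_adapter_spec()
    assert default_spec.max_tokens == 20

    # A scenario such as VQA-RAD could now request a larger completion budget.
    vqa_rad_spec = _get_short_answer_generation_adapter_spec(max_tokens=50)
    assert vqa_rad_spec.max_tokens == 50

Using `20 if max_tokens is None else max_tokens` rather than a plain default argument of 20 keeps `None` as an explicit "use the default" sentinel, so callers can pass the parameter through unconditionally.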