
How to return the probs of the output tokens? #6980

Open
1 task done
PapaMadeleine2022 opened this issue Feb 18, 2025 · 0 comments
Labels
enhancement (New feature or request), pending (This problem is yet to be addressed)

Comments

@PapaMadeleine2022

Reminder

  • I have read the above rules and searched the existing issues.

Description

As the title says: when running inference with vllm_infer.py, how can I get the probability of each output token returned? How should the call `results = LLM(**engine_args).generate(inputs, sampling_params, lora_request=lora_request)` be modified?
Any help appreciated, this is urgent!
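
A minimal sketch of one way this might work, assuming vllm_infer.py uses vLLM's offline `LLM` API as in the call above: passing `logprobs=N` to `SamplingParams` makes `generate()` attach per-token log-probabilities to each `CompletionOutput`. The `engine_args` and `inputs` values below are hypothetical placeholders standing in for whatever the script actually builds; a `lora_request` argument can be passed through to `generate()` unchanged.

```python
import math

from vllm import LLM, SamplingParams

# Hypothetical placeholders -- substitute the values vllm_infer.py builds.
engine_args = {"model": "path/or/name/of/your/model"}
inputs = ["Hello, my name is"]

# logprobs=1 asks vLLM to return the log-probability of each sampled token
# (a larger value also returns the top-k alternatives at every step).
sampling_params = SamplingParams(temperature=0.7, max_tokens=64, logprobs=1)

results = LLM(**engine_args).generate(inputs, sampling_params)

for result in results:
    completion = result.outputs[0]
    # completion.logprobs holds one dict per generated token, mapping
    # token id -> Logprob; exp() converts a log-probability to a probability.
    for token_id, logprob_dict in zip(completion.token_ids, completion.logprobs):
        prob = math.exp(logprob_dict[token_id].logprob)
        print(token_id, prob)
```

If probabilities for the prompt tokens are needed as well, `SamplingParams` also accepts a `prompt_logprobs` argument that works the same way.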

Pull Request

No response

PapaMadeleine2022 added the enhancement and pending labels on Feb 18, 2025