[Core] tokens in queue metric #12286
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
(force-pushed from 28429f8 to 29d9990)
This could negatively impact performance when the number of requests in the waiting queue grows large. The list can grow arbitrarily large (e.g., if the user submits 20,000 requests in batch mode, the list will be 20k entries long). We should not iterate over the list on every engine step.
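For illustration, here is a minimal sketch of the per-step pattern being flagged; the request object and its field name are assumptions, not vLLM's actual API:

```python
# Hypothetical illustration of the flagged pattern: recomputing the total
# by walking the waiting queue on every engine step costs O(len(waiting))
# per step -- e.g. ~20k iterations per step for a 20k-request batch.
def tokens_in_queue_naive(waiting) -> int:
    return sum(req.num_prompt_tokens for req in waiting)
```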
That makes sense to me. I think one option we have is to store and update a variable in the scheduler (e.g., scheduler.py) and emit that variable as the current metric. So rather than adding an additional for loop, we take advantage of the iteration the scheduler already performs. Note that we already have iteration logic over the requests in the waiting queue for the existing num_prompt_tokens and num_generations_tokens metrics (vllm/engine/llm_engine.py, line 1642 at 2cbeeda); wondering if there are plans to improve those metrics in the future as well 🤔
Update from a chat in Slack, ty @robertgshaw2-redhat for the feedback! I'll update the logic to remove the for loop and store a variable in scheduler.py instead.
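A minimal sketch of that counter-based alternative, assuming hypothetical hook names rather than vLLM's real scheduler API: the scheduler updates a running total at the points where requests enter and leave the waiting queue, so emitting the metric is O(1) per engine step.

```python
class SchedulerTokenCounter:
    """Illustrative counter (not vLLM's actual Scheduler); tracks the
    total prompt tokens of requests currently in the waiting queue."""

    def __init__(self) -> None:
        self._tokens_in_queue = 0

    def on_request_waiting(self, num_prompt_tokens: int) -> None:
        # Hypothetical hook: a request was appended to the waiting queue.
        self._tokens_in_queue += num_prompt_tokens

    def on_request_scheduled(self, num_prompt_tokens: int) -> None:
        # Hypothetical hook: the scheduler moved a request WAITING -> RUNNING.
        self._tokens_in_queue -= num_prompt_tokens

    @property
    def tokens_in_queue(self) -> int:
        # Read by the metrics logger each engine step; no queue iteration.
        return self._tokens_in_queue
```

The same pair of hooks would also cover preemption (RUNNING -> WAITING) by re-adding the preempted request's tokens to the total.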
(force-pushed from 29d9990 to 5d4c2d7)
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Anna Pendleton <[email protected]>
(force-pushed from 5d4c2d7 to 75f2e32)
This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!
This pull request has been automatically closed due to inactivity. Please feel free to reopen if you intend to continue working on it. Thank you!
Metric for total tokens waiting in the queue
The goal is to measure how many tokens are waiting in the queue. Ideally, this avoids double counting when prefix caching is enabled.
The motivation for adding this metric is that I am currently prototyping an autoscaling strategy that takes the sum of (total tokens in the KV cache + total tokens in the queue) into account.
I am seeking feedback on whether I've implemented this tokens-in-queue metric appropriately; I attempted to duplicate the logic behind the scheduler's decision to move a request's status from WAITING -> RUNNING.
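For concreteness, here is a sketch of the intended computation under prefix caching; `Request` and its fields are illustrative assumptions, not vLLM's actual types:

```python
from dataclasses import dataclass

@dataclass
class Request:
    num_prompt_tokens: int
    num_cached_prefix_tokens: int = 0  # already resident in the KV cache

def tokens_in_queue(waiting: list[Request]) -> int:
    # Subtract prefix-cached tokens so they are not double counted
    # against the KV-cache side of the autoscaling signal.
    return sum(r.num_prompt_tokens - r.num_cached_prefix_tokens for r in waiting)

def autoscaling_signal(kv_cache_tokens: int, waiting: list[Request]) -> int:
    # The motivating signal: total tokens in the KV cache plus total
    # tokens still waiting in the queue.
    return kv_cache_tokens + tokens_in_queue(waiting)
```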
cc @robertgshaw2-redhat who has experience with the tokens-in-batch metric
cc @apatke who implemented the priority scheduling logic that part of the tokens-in-queue computation is based on
cc @rkooo567 who implemented the chunked prefill refactor that changed part of the scheduler's decision to remove requests from the queue
cc @youkaichao who simplified the scheduler code