[Core] tokens in queue metric #12286

Closed

Conversation

@annapendleton (Contributor) commented Jan 21, 2025

Metric for total tokens waiting in the queue

The goal is to measure how many tokens are waiting in the queue. Ideally this would avoid double counting when prefix caching is enabled.

The motivation for adding this metric is that I am currently prototyping an autoscaling strategy that takes the sum of (total tokens in the KV cache + total tokens in the queue) into account.

I'm seeking feedback on whether I've implemented this tokens-in-the-queue metric appropriately; I attempted to duplicate the logic behind the scheduler's decision to move a request's status from WAITING -> RUNNING.

cc @robertgshaw2-redhat who has experience with the tokens-in-the-batch metric
cc @apatke who implemented the priority scheduling logic that part of tokens-in-queue is based on
cc @rkooo567 who implemented the chunked prefill refactor, which changed part of the scheduler's decision to remove requests from the queue
cc @youkaichao who simplified the scheduler code
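
For context, a minimal sketch of what such a metric computes (all names below are illustrative assumptions, not the actual vLLM code): walk the waiting queue and sum each request's prompt tokens, minus any tokens already covered by the prefix cache.

```python
# Minimal sketch (illustrative names, not the actual vLLM code): compute
# the total tokens waiting in the queue, subtracting prefix-cache hits to
# avoid double counting tokens that are already in the KV cache.
from collections import deque
from dataclasses import dataclass
from typing import Deque


@dataclass
class QueuedRequest:
    num_prompt_tokens: int
    num_cached_tokens: int = 0  # prompt tokens already in the KV cache


def tokens_in_queue(waiting: Deque[QueuedRequest]) -> int:
    # O(n) in the queue length: this per-step iteration is the cost
    # flagged in the review comment below.
    return sum(
        max(req.num_prompt_tokens - req.num_cached_tokens, 0)
        for req in waiting
    )
```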


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which executes a small and essential subset of CI tests to quickly catch errors. You can run the remaining CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@annapendleton force-pushed the tokens-in-queue branch 2 times, most recently from 28429f8 to 29d9990 on January 22, 2025 at 20:45
@robertgshaw2-redhat (Collaborator) commented:

This could negatively impact performance when the number of requests in the waiting queue grows large. The list can grow arbitrarily long (e.g. if a user submits 20,000 requests in batch mode, the list will be 20k entries), so we should not iterate over it on every engine step.

@annapendleton (Contributor, Author) commented Jan 23, 2025

> This could negatively impact performance when the number of requests in the waiting queue grows large. The list can grow arbitrarily long (e.g. if a user submits 20,000 requests in batch mode, the list will be 20k entries), so we should not iterate over it on every engine step.

That makes sense to me. One option is to store and update a variable in the scheduler (e.g. scheduler.py) and emit that variable as the metric. Rather than adding an additional for loop, we would take advantage of the iteration the scheduler already performs.

Note that we already have iteration logic over the requests for the existing num_prompt_tokens and num_generations_tokens metrics here; wondering if there are goals to improve these metrics in the future as well 🤔:

for idx, scheduled_seq_group in enumerate(
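
To make the stored-variable option concrete, here is a minimal sketch (hypothetical method names, not the actual scheduler.py code): the scheduler updates a counter at the same points it already mutates the waiting queue, so no extra per-step iteration is needed.

```python
# Sketch of an O(1)-per-event counter (hypothetical names, not the
# actual scheduler.py implementation).
class TokensInQueueCounter:
    def __init__(self) -> None:
        self.num_tokens_in_queue = 0  # read directly when emitting the metric

    def on_request_waiting(self, num_uncached_tokens: int) -> None:
        # A request enters the waiting queue.
        self.num_tokens_in_queue += num_uncached_tokens

    def on_request_scheduled(self, num_uncached_tokens: int) -> None:
        # The scheduler moves the request WAITING -> RUNNING.
        self.num_tokens_in_queue -= num_uncached_tokens

    def on_request_aborted(self, num_uncached_tokens: int) -> None:
        # Requests aborted while waiting must also decrement the counter,
        # otherwise the reported value drifts upward over time.
        self.num_tokens_in_queue -= num_uncached_tokens
```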

@annapendleton (Contributor, Author) commented:

Update from a chat in Slack; thank you @robertgshaw2-redhat for the feedback! I'll update the logic to remove the for loop and store a variable in scheduler.py instead.
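
A minimal sketch of how such a stored variable could then be surfaced (the gauge name and the log_stats hook are assumptions; vLLM's actual metrics plumbing differs in detail):

```python
# Sketch only: expose the scheduler-maintained counter as a Prometheus
# gauge. The metric name and `scheduler` attribute are assumptions.
from prometheus_client import Gauge

tokens_in_queue_gauge = Gauge(
    "vllm_num_tokens_in_queue",
    "Total uncached prompt tokens of requests waiting to be scheduled.",
)


def log_stats(scheduler) -> None:
    # No per-step iteration over the waiting queue: just read the counter.
    tokens_in_queue_gauge.set(scheduler.num_tokens_in_queue)
```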


mergify bot commented Feb 5, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @annapendleton.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 5, 2025
Signed-off-by: Anna Pendleton <[email protected]>

This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

@github-actions github-actions bot added the stale Over 90 days of inactivity label May 12, 2025

This pull request has been automatically closed due to inactivity. Please feel free to reopen if you intend to continue working on it. Thank you!

@github-actions github-actions bot closed this Jun 11, 2025