
[Core] tokens in queue metric #12286

Open · wants to merge 1 commit into base: main

Conversation


@annapendleton annapendleton commented Jan 21, 2025

Metric for total tokens waiting in the queue

Goal is to measure how many tokens are waiting in the queue. Ideally this would avoid double counting when prefix caching is enabled.

Motivation to add this metric is I am currently prototyping an autoscaling strategy that takes sum of (total tokens in the kv cache + total tokens in the queue) into account.

Seeking feedback on whether I've implemented this tokens-in-queue metric appropriately. I attempted to duplicate the logic behind the scheduler's decision to move a request's status from WAITING to RUNNING.
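The queue-scan approach described above could be sketched roughly as follows (a minimal, self-contained sketch with hypothetical names — `Request`, `num_cached_tokens`, `num_tokens_in_queue` — not the actual PR diff or vLLM's real data structures):

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Request:
    """Minimal stand-in for a queued request (hypothetical fields)."""
    prompt_token_ids: list
    num_cached_tokens: int = 0  # tokens already covered by the prefix cache


def num_tokens_in_queue(waiting: "deque[Request]") -> int:
    # Sum the tokens each WAITING request still needs processed,
    # subtracting prefix-cached tokens so they are not double counted
    # against a kv-cache usage metric.
    return sum(
        len(req.prompt_token_ids) - req.num_cached_tokens
        for req in waiting
    )
```

Note this walks the full waiting queue on every call, which is O(n) in queue length.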

cc @robertgshaw2-redhat who has experience with the tokens-in-batch metric
cc @apatke who implemented the priority scheduling logic that part of the tokens-in-queue computation is based on
cc @rkooo567 who implemented the chunked prefill refactor, which changed part of the scheduler's decision to remove requests from the queue
cc @youkaichao who simplified the scheduler code


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

Signed-off-by: Anna Pendleton <[email protected]>
@robertgshaw2-redhat
Collaborator

This could negatively impact performance when the number of requests in the waiting queue grows large. The list can grow arbitrarily long (e.g. if a user submits 20,000 requests in batch mode, the list will be 20k entries). We should not iterate the list on each engine step.

@annapendleton
Author

annapendleton commented Jan 23, 2025

> This could impact performance in a negative way when the number of requests in the waiting queue grows large. This list can grow arbitrarily large (e.g. if the user submits 20000 requests in batch mode, the list will be 20k in length). We should not iterate list in each engine step

That makes sense to me. One option is to store and update a variable in the scheduler (e.g. in scheduler.py) and emit that variable as the metric. Rather than adding an additional for loop, we take advantage of the iteration the scheduler already performs.
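The counter-in-the-scheduler idea could look something like this (a sketch with hypothetical names — `Scheduler`, `schedule_next`, `num_queued_tokens` — not the actual vLLM scheduler): the total is maintained incrementally at the points where requests enter and leave the waiting queue, so reading the metric is O(1) regardless of queue length.

```python
class Request:
    def __init__(self, prompt_token_ids):
        self.prompt_token_ids = prompt_token_ids


class Scheduler:
    """Sketch of incremental queued-token accounting (hypothetical names)."""

    def __init__(self):
        self.waiting = []
        self._num_queued_tokens = 0  # maintained incrementally, O(1) to read

    def add_request(self, request):
        self.waiting.append(request)
        self._num_queued_tokens += len(request.prompt_token_ids)

    def schedule_next(self):
        # Models the WAITING -> RUNNING transition: the counter is
        # decremented at the same point the request leaves the queue.
        request = self.waiting.pop(0)
        self._num_queued_tokens -= len(request.prompt_token_ids)
        return request

    @property
    def num_queued_tokens(self):
        return self._num_queued_tokens
```

The metric emission code would then read `scheduler.num_queued_tokens` instead of re-scanning the waiting list each engine step.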

Note that we already have iteration logic through the requests for the existing num_prompt_tokens and num_generations_tokens metrics here; wondering if there are goals to improve these metrics in the future as well 🤔:

for idx, scheduled_seq_group in enumerate(

@annapendleton
Author

Update from a chat in Slack — ty @robertgshaw2-redhat for the feedback! I'll update the logic to remove the for loop and store a variable in scheduler.py.
