
[v1][stats][1/n] Add RequestStatsUpdate and RequestStats types #10907

Open · wants to merge 6 commits into main
Conversation

rickyyx
Contributor

@rickyyx rickyyx commented Dec 4, 2024

This PR adds the basic data structures to capture request stats changes and their lifecycle transitions.

Intended usage in follow-up PRs

  • The RequestStatsUpdate would be recorded on both the engine core process and the frontend engine process (in the async multiprocess architecture) as a request goes through its various stages (i.e. input processing, prefill, decode, finished, etc.). In the single-process architecture, both would live in the same process.
  • The frontend engine would periodically poll the request updates, materialize them into a RequestStats per request, and build a loggable Stats object each iteration.

See the full prototype in #10651

Addresses #10582

Signed-off-by: rickyx <[email protected]>

github-actions bot commented Dec 4, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

Signed-off-by: rickyx <[email protected]>
Signed-off-by: rickyx <[email protected]>
@WoosukKwon
Collaborator

@robertgshaw2-neuralmagic Do you have bandwidth to review this PR by any chance?

@robertgshaw2-neuralmagic
Collaborator

@robertgshaw2-neuralmagic Do you have bandwidth to review this PR by any chance?

Yeah

Collaborator

@robertgshaw2-neuralmagic robertgshaw2-neuralmagic left a comment


Generally looks really good. My thoughts are:

  • Preemption: I do not think that the way we are handling preemption is consistent with what we should be exposing to users (see comments in the code for why)

  • Detokenization / Finished: I think tracking these data is a bit complicated, since they are not in the EngineCore and the Detokenizer runs concurrently with the EngineCore. I am curious to get your thoughts on which object's POV we should use to consider a request "finished", and how (or whether) we should include detokenization time in the TPOT metrics given the V1 design.

@rickyyx
Contributor Author

rickyyx commented Dec 11, 2024

Generally looks really good. My thoughts are:

  • Preemption: I do not think that the way we are handling preemption is consistent with what we should be exposing to users (see comments in the code for why)
  • Detokenization / Finished: I think tracking these data is a bit complicated, since they are not in the EngineCore and the Detokenizer runs concurrently with the EngineCore. I am curious to get your thoughts on which object's POV we should use to consider a request "finished", and how (or whether) we should include detokenization time in the TPOT metrics given the V1 design.

Thanks for the great review @robertgshaw2-neuralmagic!
For preemption:

  1. Model forward time / model execute time should accumulate across preemptions, as you pointed out.
  2. The same goes for output token timestamps.

For detokenization stage and finish case:

  1. I think it might be better to mark a request finished from the PoV of the EngineCore only, by sending the finish reason in the abort-request RPC.
  2. For detokenization time, this is a great point: TPOT should actually include detokenization time, and it should be recorded on the detokenizer.

Signed-off-by: rickyx <[email protected]>
Signed-off-by: rickyx <[email protected]>
@rickyyx
Contributor Author

rickyyx commented Dec 12, 2024

Updates:

  • Preemption no longer clears some of the fields (e.g. output token timestamps)
  • Added an explicit FINISHED update to be used by the caller (i.e. the detokenizer)
  • Split the running state into explicit PREFILLING and DECODING states
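The preemption behavior in the first bullet can be sketched as follows; the class and field names here are hypothetical stand-ins, not the PR's actual code:

```python
# Hedged sketch of the preemption semantics discussed above: a preemption
# resets scheduling progress but must NOT clear accumulated timing data.
# All names below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class RunningRequestStats:
    num_computed_tokens: int = 0                 # reset on preemption
    output_token_ts: list[float] = field(default_factory=list)  # preserved
    model_execute_s: float = 0.0                 # accumulates across preemptions

    def on_preempted(self) -> None:
        # Only scheduling progress resets; timing data survives so final
        # metrics reflect the request's whole lifetime.
        self.num_computed_tokens = 0
```
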

Signed-off-by: rickyx <[email protected]>
@comaniac
Collaborator

As we are adding more support for caching (MM preprocessor cache, prefix caching, and potentially embedding caching), it would be useful to track cache status in metrics, so I would like to bring the metric system to v1 ASAP if possible.

@robertgshaw2-neuralmagic robertgshaw2-neuralmagic added the ready ONLY add when PR is ready to merge/full CI is needed label Jan 11, 2025
@robertgshaw2-neuralmagic robertgshaw2-neuralmagic enabled auto-merge (squash) January 11, 2025 17:36
Collaborator

@robertgshaw2-neuralmagic robertgshaw2-neuralmagic left a comment


xxx

@robertgshaw2-neuralmagic robertgshaw2-neuralmagic enabled auto-merge (squash) January 11, 2025 19:32