[V1] [2/n] Logging and Metrics - OutputProcessor Abstraction #11973

Open
Wants to merge 69 commits into base: main
Changes from 1 commit

Commits (69)
cfa8c2b
added code
robertgshaw2-neuralmagic Jan 11, 2025
6d8e4f3
fixed
robertgshaw2-neuralmagic Jan 11, 2025
c78a56f
fixed
robertgshaw2-neuralmagic Jan 11, 2025
7b39705
updated
robertgshaw2-neuralmagic Jan 11, 2025
6e9cd1c
updated
robertgshaw2-neuralmagic Jan 11, 2025
2657b7f
fixed
robertgshaw2-neuralmagic Jan 11, 2025
249b9ff
updated
robertgshaw2-neuralmagic Jan 11, 2025
c1f9292
refactoring metrics
robertgshaw2-neuralmagic Jan 11, 2025
c641866
updated
robertgshaw2-neuralmagic Jan 12, 2025
1ce7a5f
updated
robertgshaw2-neuralmagic Jan 12, 2025
c1baa6d
Merge branch 'v1-metrics' into v1-metrics-2
robertgshaw2-neuralmagic Jan 12, 2025
f8de299
added output processor
robertgshaw2-neuralmagic Jan 12, 2025
49ca9bb
added all files
robertgshaw2-neuralmagic Jan 12, 2025
86d33a1
stash
robertgshaw2-neuralmagic Jan 12, 2025
4066fc8
working again
robertgshaw2-neuralmagic Jan 12, 2025
5ef374c
Merge branch 'v1-metrics' into v1-metrics-2
robertgshaw2-neuralmagic Jan 12, 2025
c9ffc60
fixed sorting
robertgshaw2-neuralmagic Jan 12, 2025
5f3f3b7
Merge branch 'main' into v1-metrics-2
robertgshaw2-neuralmagic Jan 12, 2025
e34b9dc
merged
robertgshaw2-neuralmagic Jan 12, 2025
dd6e3d6
reduce number of changes
robertgshaw2-neuralmagic Jan 12, 2025
dbd86b8
reduce changes
robertgshaw2-neuralmagic Jan 12, 2025
ebf3530
reduce changes
robertgshaw2-neuralmagic Jan 12, 2025
7b6d9b3
updared
robertgshaw2-neuralmagic Jan 12, 2025
707796f
make pr more reviewable
robertgshaw2-neuralmagic Jan 12, 2025
df72c8f
update comments
robertgshaw2-neuralmagic Jan 12, 2025
9d67efc
make PR more readable
robertgshaw2-neuralmagic Jan 12, 2025
1cae783
reduce cruft
robertgshaw2-neuralmagic Jan 12, 2025
6401cfa
reduce changes
robertgshaw2-neuralmagic Jan 12, 2025
33bc01d
reduce changes
robertgshaw2-neuralmagic Jan 12, 2025
7dda305
updated
robertgshaw2-neuralmagic Jan 12, 2025
769cff5
reduce changes
robertgshaw2-neuralmagic Jan 12, 2025
b1b4c47
minor cleanups
robertgshaw2-neuralmagic Jan 12, 2025
2f916d1
clean up
robertgshaw2-neuralmagic Jan 12, 2025
6a5f245
updated
robertgshaw2-neuralmagic Jan 12, 2025
9ea36c8
updated
robertgshaw2-neuralmagic Jan 12, 2025
318c203
reduce changes
robertgshaw2-neuralmagic Jan 12, 2025
3746183
reduce LOC changes
robertgshaw2-neuralmagic Jan 12, 2025
449405b
updated
robertgshaw2-neuralmagic Jan 12, 2025
79f2f5f
remove file
robertgshaw2-neuralmagic Jan 12, 2025
a16d27f
updated
robertgshaw2-neuralmagic Jan 12, 2025
19372f9
reduce LOC changes
robertgshaw2-neuralmagic Jan 12, 2025
39be503
updated
robertgshaw2-neuralmagic Jan 12, 2025
833f028
updated
robertgshaw2-neuralmagic Jan 12, 2025
ef2c3f9
updated
robertgshaw2-neuralmagic Jan 12, 2025
33303fc
updated
robertgshaw2-neuralmagic Jan 12, 2025
edae5d2
updated
robertgshaw2-neuralmagic Jan 12, 2025
a20c7b5
updated
robertgshaw2-neuralmagic Jan 12, 2025
b7e5a91
updated
robertgshaw2-neuralmagic Jan 12, 2025
9353010
fixed
robertgshaw2-neuralmagic Jan 12, 2025
94de9f5
cleanup
robertgshaw2-neuralmagic Jan 12, 2025
2ea4283
revert abort test
robertgshaw2-neuralmagic Jan 12, 2025
b9683d1
updared
robertgshaw2-neuralmagic Jan 12, 2025
92c3b0c
stash
robertgshaw2-neuralmagic Jan 12, 2025
a985a73
added logging and comment
robertgshaw2-neuralmagic Jan 12, 2025
6c36d87
starting to fix tests - stash
robertgshaw2-neuralmagic Jan 12, 2025
595fd12
updated tests
robertgshaw2-neuralmagic Jan 12, 2025
5ecfe8e
make tests pass
robertgshaw2-neuralmagic Jan 12, 2025
5f37918
reduce LOC changes
robertgshaw2-neuralmagic Jan 12, 2025
1d9b233
updated
robertgshaw2-neuralmagic Jan 12, 2025
2880962
added IterationStats test
robertgshaw2-neuralmagic Jan 13, 2025
7de7c00
codespell
robertgshaw2-neuralmagic Jan 13, 2025
eec573c
add comment about invairant
robertgshaw2-neuralmagic Jan 13, 2025
0427e03
updated
robertgshaw2-neuralmagic Jan 13, 2025
9b49133
tweak
robertgshaw2-neuralmagic Jan 13, 2025
bffa5d0
formatting and added test
robertgshaw2-neuralmagic Jan 13, 2025
605c5f0
passing
robertgshaw2-neuralmagic Jan 13, 2025
d0013a4
ruff ruff
robertgshaw2-neuralmagic Jan 13, 2025
e01d236
format
robertgshaw2-neuralmagic Jan 13, 2025
a53f089
run isort
robertgshaw2-neuralmagic Jan 13, 2025
updated
robertgshaw2-neuralmagic committed Jan 12, 2025
commit ef2c3f9e18988f444a88dbd85cb65b3d68ce9b56
9 changes: 3 additions & 6 deletions vllm/v1/engine/async_llm.py
@@ -1,6 +1,6 @@
import asyncio
import os
from typing import AsyncGenerator, Dict, List, Mapping, Optional, Type, Union

Check failure on line 3 in vllm/v1/engine/async_llm.py (GitHub Actions / ruff (3.12)): vllm/v1/engine/async_llm.py:3:36: F401 `typing.Dict` imported but unused

from vllm.config import ModelConfig, VllmConfig
from vllm.engine.arg_utils import AsyncEngineArgs
@@ -20,7 +20,7 @@
from vllm.v1.engine.core_client import EngineCoreClient
from vllm.v1.engine.output_processor import OutputProcessor
from vllm.v1.engine.processor import Processor
from vllm.v1.engine.request_state import RequestState

Check failure on line 23 in vllm/v1/engine/async_llm.py (GitHub Actions / ruff (3.12)): vllm/v1/engine/async_llm.py:23:42: F401 `vllm.v1.engine.request_state.RequestState` imported but unused
from vllm.v1.executor.abstract import Executor
from vllm.v1.metrics.loggers import LoggingStatLogger, StatLoggerBase
from vllm.v1.metrics.stats import IterationStats, SchedulerStats
@@ -235,19 +235,17 @@
                engine_core_outputs = await self.engine_core.get_output_async()

                # 2) Process EngineCoreOutputs.
-               processed_outputs = self.output_processor.process_outputs(
-                   engine_core_outputs, self.request_states)
+               outputs = self.output_processor.process_outputs(engine_core_outputs)

Check failure on line 238 in vllm/v1/engine/async_llm.py (GitHub Actions / ruff (3.12)): vllm/v1/engine/async_llm.py:238:81: E501 Line too long (84 > 80)

                # 3) Abort any reqs that finished due to stop strings.
-               await self.engine_core.abort_requests_async(
-                   processed_outputs.reqs_to_abort)
+               await self.engine_core.abort_requests_async(outputs.reqs_to_abort)

Check failure on line 241 in vllm/v1/engine/async_llm.py (GitHub Actions / ruff (3.12)): vllm/v1/engine/async_llm.py:241:81: E501 Line too long (82 > 80)

                # 4) Logging.
                # TODO(rob): make into a coroutine and launch it in
                # background thread once we add Prometheus.
                self._log_stats(
                    scheduler_stats=engine_core_outputs.scheduler_stats,
-                   iteration_stats=processed_outputs.iteration_stats,
+                   iteration_stats=outputs.iteration_stats,
                )

        except Exception as e:
@@ -261,7 +259,6 @@
        await self.engine_core.abort_requests_async(request_ids)
        self.output_processor.abort_requests(request_ids)

-
    def _log_stats(
        self,
        scheduler_stats: SchedulerStats,
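To make the new call shape in async_llm.py easier to follow, here is a minimal, runnable sketch of one handler step after this change. Only the call shapes (process_outputs(engine_core_outputs), outputs.reqs_to_abort, outputs.iteration_stats) come from the diff above; the handle_one_step wrapper and the Fake* stand-ins are illustrative assumptions, not vLLM code.

```python
import asyncio
from types import SimpleNamespace


async def handle_one_step(engine_core, output_processor, log_stats) -> None:
    # 1) Pull EngineCoreOutputs from the EngineCore.
    engine_core_outputs = await engine_core.get_output_async()

    # 2) Process EngineCoreOutputs. No request_states argument anymore:
    #    the OutputProcessor tracks request state internally.
    outputs = output_processor.process_outputs(engine_core_outputs)

    # 3) Abort any requests that finished due to stop strings.
    await engine_core.abort_requests_async(outputs.reqs_to_abort)

    # 4) Logging.
    log_stats(scheduler_stats=engine_core_outputs.scheduler_stats,
              iteration_stats=outputs.iteration_stats)


# Hypothetical stand-ins so the sketch runs on its own.
class FakeEngineCore:
    async def get_output_async(self):
        return SimpleNamespace(scheduler_stats={}, outputs=[])

    async def abort_requests_async(self, request_ids):
        pass


class FakeOutputProcessor:
    def process_outputs(self, engine_core_outputs):
        return SimpleNamespace(reqs_to_abort=[], iteration_stats={})


if __name__ == "__main__":
    asyncio.run(handle_one_step(FakeEngineCore(), FakeOutputProcessor(),
                                lambda **kwargs: print(kwargs)))
```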
5 changes: 2 additions & 3 deletions vllm/v1/engine/output_processor.py
@@ -112,7 +112,6 @@ def make_request_output(
    def process_outputs(
        self,
        outputs: EngineCoreOutputs,
-       request_states: Dict[str, RequestState],
    ) -> OutputProcessorOutput:
        """
        Process the EngineCoreOutputs:
@@ -146,7 +145,7 @@ def process_outputs(
        iteration_stats = IterationStats(self.log_stats)
        for engine_core_output in outputs.outputs:
            req_id = engine_core_output.request_id
-           req_state = request_states.get(req_id)
+           req_state = self.request_states.get(req_id)
            if req_state is None:
                # Ignore output for already-aborted request.
                continue
@@ -173,7 +172,7 @@

            # Free completed requests.
            if request_output.finished:
-               request_states.pop(req_id)
+               self.request_states.pop(req_id)
                if not engine_core_output.finished:
                    # If req not finished in EngineCore, but Detokenizer
                    # detected stop string, abort needed in EngineCore.
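For the output_processor.py side, here is a compact, self-contained sketch of the OutputProcessor abstraction this commit moves toward: the processor owns self.request_states (the caller no longer passes a dict in), and process_outputs() hands back the requests to abort plus per-iteration stats. The dataclasses, the detok_stop flag, and the finish/abort logic are simplified stand-ins for illustration, not the actual vLLM classes.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EngineCoreOutput:
    request_id: str
    finished: bool              # finished inside the EngineCore
    detok_stop: bool = False    # stop string detected during detokenization


@dataclass
class EngineCoreOutputs:
    outputs: List[EngineCoreOutput]


@dataclass
class RequestState:
    request_id: str


@dataclass
class IterationStats:
    num_outputs: int = 0


@dataclass
class OutputProcessorOutput:
    reqs_to_abort: List[str] = field(default_factory=list)
    iteration_stats: IterationStats = field(default_factory=IterationStats)


class OutputProcessor:
    def __init__(self) -> None:
        # Request state now lives on the processor itself.
        self.request_states: Dict[str, RequestState] = {}

    def add_request(self, request_id: str) -> None:
        self.request_states[request_id] = RequestState(request_id)

    def abort_requests(self, request_ids: List[str]) -> None:
        for req_id in request_ids:
            self.request_states.pop(req_id, None)

    def process_outputs(self, outputs: EngineCoreOutputs) -> OutputProcessorOutput:
        result = OutputProcessorOutput()
        for core_output in outputs.outputs:
            req_id = core_output.request_id
            req_state = self.request_states.get(req_id)
            if req_state is None:
                # Ignore output for an already-aborted request.
                continue
            result.iteration_stats.num_outputs += 1

            if core_output.finished or core_output.detok_stop:
                # Free completed requests.
                self.request_states.pop(req_id)
                if not core_output.finished:
                    # Stop string hit before the EngineCore finished the
                    # request, so the caller must abort it in the EngineCore.
                    result.reqs_to_abort.append(req_id)
        return result


if __name__ == "__main__":
    processor = OutputProcessor()
    processor.add_request("req-0")
    out = processor.process_outputs(
        EngineCoreOutputs(outputs=[EngineCoreOutput("req-0", finished=False,
                                                    detok_stop=True)]))
    print(out.reqs_to_abort)         # ['req-0']: needs an EngineCore abort
    print(processor.request_states)  # {}: state freed by the processor
```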