
Conversation

@seratch seratch commented Aug 28, 2025

This pull request improves interoperability with LiteLLM for structured output use cases (at least with Claude models). I found that LiteLLM can append a non-existent tool call named "json_tool_call" to handle a JSON schema response format. Currently, the Agents SDK raises an exception saying that the "json_tool_call" tool does not exist, but allowing this tool call simply enables the use case without any issues. I think it's okay for us to add additional logic like this PR does; it's safe and does not break anything.

@seratch seratch requested a review from rm-openai August 28, 2025 09:51
@seratch seratch added enhancement New feature or request feature:lite-llm labels Aug 28, 2025

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


Codex Review: Here are some suggestions.


Comment on lines 509 to +522

    # Regular function tool call
    else:
        if output.name not in function_map:
            if output_schema is not None and output.name == "json_tool_call":
                # LiteLLM could generate non-existent tool calls for structured outputs
                items.append(ToolCallItem(raw_item=output, agent=agent))
                functions.append(
                    ToolRunFunction(
                        tool_call=output,
                        # this tool does not exist in function_map, so generate an ad-hoc one,
                        # which just parses the input if it's a string, and returns the
                        # value otherwise
                        function_tool=_build_litellm_json_tool_call(output),
                    )
                )
                continue
            _error_tracing.attach_error_to_current_span(
                SpanError(
                    message="Tool not found",
                    data={"tool_name": output.name},
                )
            )

[P1] json_tool_call result never returned as final output

The new branch allows a LiteLLM json_tool_call to execute by synthesizing a dummy FunctionTool, but nothing marks that tool result as the final agent output. With the default tool_use_behavior="run_llm_again" (see _check_for_final_output_from_tools), the runner will always invoke the model again after the synthetic tool runs. LiteLLM treats json_tool_call itself as the structured response and does not send a follow‑up message, so the agent re-enters the loop until it hits MaxTurnsExceeded and the parsed JSON is never surfaced to the caller. Consider short‑circuiting to a final output when this special tool is encountered (or automatically switching the tool use behavior) so the structured output can be returned instead of triggering another model turn.
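A plain-Python sketch (no SDK dependency) of the short-circuit this review suggests: after tools run, treat the synthetic json_tool_call result as the final output instead of invoking the model again. The function name and the `(final_output, run_llm_again)` return shape are illustrative, not the SDK's actual API.

```python
def resolve_turn(tool_results, output_schema):
    # tool_results: list of (tool_name, result) pairs from the current turn.
    for name, result in tool_results:
        if output_schema is not None and name == "json_tool_call":
            # LiteLLM treats json_tool_call itself as the structured response,
            # so surface its parsed result rather than running the LLM again.
            return result, False
    # Default tool_use_behavior="run_llm_again": no final output yet.
    return None, True
```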


@seratch seratch merged commit 3b36fd9 into main Aug 29, 2025
5 checks passed
@seratch seratch deleted the litellm-structured-output branch August 29, 2025 00:47