Handle LiteLLM's json_tool_call addition for structured outputs #1602
Conversation
Codex Review: Here are some suggestions.
Reply with `@codex fix comments` to fix any unresolved comments.
```diff
            # Regular function tool call
            else:
                if output.name not in function_map:
-                   _error_tracing.attach_error_to_current_span(
-                       SpanError(
-                           message="Tool not found",
-                           data={"tool_name": output.name},
+                   if output_schema is not None and output.name == "json_tool_call":
+                       # LiteLLM could generate non-existent tool calls for structured outputs
+                       items.append(ToolCallItem(raw_item=output, agent=agent))
+                       functions.append(
+                           ToolRunFunction(
+                               tool_call=output,
+                               # this tool does not exist in function_map, so generate ad-hoc one,
+                               # which just parses the input if it's a string, and returns the
+                               # value otherwise
+                               function_tool=_build_litellm_json_tool_call(output),
+                           )
```
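The helper `_build_litellm_json_tool_call` referenced in the diff is not shown in this excerpt. Based on the inline comment (parse the input if it's a string, return it otherwise), here is a minimal sketch of what such a helper could look like, using the SDK's `FunctionTool` dataclass. The field values and the `strict_json_schema=False` choice are assumptions for illustration, not the PR's actual code.

```python
import json
from typing import Any

from agents import FunctionTool, RunContextWrapper


def _build_litellm_json_tool_call(tool_call) -> FunctionTool:
    """Ad-hoc tool for LiteLLM's synthetic json_tool_call (sketch, not the PR's code)."""

    async def _on_invoke(ctx: RunContextWrapper[Any], arguments: str) -> Any:
        # LiteLLM puts the structured output itself into the tool call arguments,
        # so "running" the tool just means parsing that JSON string.
        if isinstance(arguments, str):
            return json.loads(arguments)
        return arguments

    return FunctionTool(
        name=tool_call.name,  # "json_tool_call"
        description="Synthetic tool that returns LiteLLM's structured output as-is.",
        params_json_schema={},  # assumption: no schema needed for a pass-through tool
        on_invoke_tool=_on_invoke,
        strict_json_schema=False,
    )
```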
[P1] json_tool_call result never returned as final output

The new branch allows a LiteLLM `json_tool_call` to execute by synthesizing a dummy `FunctionTool`, but nothing marks that tool result as the final agent output. With the default `tool_use_behavior="run_llm_again"` (see `_check_for_final_output_from_tools`), the runner will always invoke the model again after the synthetic tool runs. LiteLLM treats `json_tool_call` itself as the structured response and does not send a follow-up message, so the agent re-enters the loop until it hits `MaxTurnsExceeded` and the parsed JSON is never surfaced to the caller. Consider short-circuiting to a final output when this special tool is encountered (or automatically switching the tool use behavior) so the structured output can be returned instead of triggering another model turn.
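Until the SDK short-circuits this itself, the reviewer's suggestion can be approximated on the caller side with the SDK's `tool_use_behavior` hook. A hedged sketch follows; it assumes the `ToolsToFinalOutputFunction` signature from the Agents SDK, and the `"json_tool_call"` name comes from LiteLLM as described in this PR.

```python
from typing import Any

from agents import Agent, FunctionToolResult, RunContextWrapper, ToolsToFinalOutputResult


async def stop_on_json_tool_call(
    ctx: RunContextWrapper[Any], results: list[FunctionToolResult]
) -> ToolsToFinalOutputResult:
    # Treat the synthetic structured-output tool's result as the final output
    # instead of handing control back to the model for another turn.
    for result in results:
        if result.tool.name == "json_tool_call":
            return ToolsToFinalOutputResult(is_final_output=True, final_output=result.output)
    # Otherwise fall back to the default loop and let the model run again.
    return ToolsToFinalOutputResult(is_final_output=False, final_output=None)


agent = Agent(
    name="structured-output-agent",
    tool_use_behavior=stop_on_json_tool_call,  # instead of the default "run_llm_again"
)
```

A built-in equivalent in the runner, as the review suggests, would avoid pushing this workaround onto every LiteLLM user.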
This pull request improves interoperability with LiteLLM for structured-output use cases (at least with Claude models). I found that LiteLLM can append a non-existent tool called `json_tool_call` to handle the JSON-schema response format. Currently, the Agents SDK raises an exception saying that the `json_tool_call` tool does not exist, but allowing this tool call simply enables the use case without any issues. I think it's okay for us to add additional logic like this PR does; it's safe and does not break anything.
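For context, a minimal example of the use case this PR unblocks. The model id and prompt are illustrative, not taken from the PR.

```python
from pydantic import BaseModel

from agents import Agent, Runner
from agents.extensions.models.litellm_model import LitellmModel


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


agent = Agent(
    name="extractor",
    instructions="Extract the event details from the text.",
    # Illustrative model id; any Claude model routed through LiteLLM behaves the same way.
    model=LitellmModel(model="anthropic/claude-3-5-sonnet-20241022"),
    # output_type makes the SDK request a JSON-schema response format. LiteLLM fulfils it
    # for Claude by emitting a "json_tool_call" tool call, which previously failed with
    # "Tool not found" and is now handled by the ad-hoc tool added in this PR.
    output_type=CalendarEvent,
)

result = Runner.run_sync(agent, "Alice and Bob are meeting for lunch on Friday.")
print(result.final_output)
```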