
[Bug]: Tool description exceeds maximum length of 1024 characters. Please shorten your description or move it to the prompt. #17858

Open
danerlt opened this issue Feb 19, 2025 · 3 comments
Labels
bug Something isn't working triage Issue needs to be triaged/prioritized

Comments

danerlt commented Feb 19, 2025

Bug Description

When I use the workflow and use the tool, it prompts me: Tool description exceeds maximum length of 1024 characters. Please shorten your description or move it to the prompt.

Version

0.12.19

Steps to Reproduce

When I run the workflow and use a tool with a very long description, an error occurs: Tool description exceeds maximum length of 1024 characters. Please shorten your description or move it to the prompt.

I see that to_openai_tool has a skip_length_check parameter that can skip the length check.

    def to_openai_tool(self, skip_length_check: bool = False) -> Dict[str, Any]:
        """To OpenAI tool."""
        if not skip_length_check and len(self.description) > 1024:
            raise ValueError(
                "Tool description exceeds maximum length of 1024 characters. "
                "Please shorten your description or move it to the prompt."
            )
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.get_parameters_dict(),
            },
        }
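For reference, the check itself is straightforward. The following is a self-contained sketch of the same validation logic (the function name and constant here are my own, not llama_index's API), showing how skip_length_check bypasses it:

```python
from typing import Any, Dict

MAX_DESCRIPTION_LEN = 1024  # OpenAI's documented limit for tool descriptions


def build_openai_tool_spec(
    name: str,
    description: str,
    parameters: Dict[str, Any],
    skip_length_check: bool = False,
) -> Dict[str, Any]:
    """Mimic ToolMetadata.to_openai_tool: raise on long descriptions unless skipped."""
    if not skip_length_check and len(description) > MAX_DESCRIPTION_LEN:
        raise ValueError(
            "Tool description exceeds maximum length of 1024 characters. "
            "Please shorten your description or move it to the prompt."
        )
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }


long_desc = "x" * 1500

# Without the flag, a description over the limit raises ValueError
try:
    build_openai_tool_spec("my_tool", long_desc, {})
except ValueError as e:
    print("raised:", e)

# With skip_length_check=True, the same description is accepted unchanged
spec = build_openai_tool_spec("my_tool", long_desc, {}, skip_length_check=True)
print(len(spec["function"]["description"]))  # 1500
```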

However, in OpenAI's _prepare_chat_with_tools method, when calling to_openai_tool, this parameter is not passed.

class OpenAI(FunctionCallingLLM):

    def _prepare_chat_with_tools(
        self,
        tools: Sequence["BaseTool"],
        user_msg: Optional[Union[str, ChatMessage]] = None,
        chat_history: Optional[List[ChatMessage]] = None,
        verbose: bool = False,
        allow_parallel_tool_calls: bool = False,
        tool_choice: Union[str, dict] = "auto",
        strict: Optional[bool] = None,
        **kwargs: Any,
    ) -> Dict[str, Any]:
        """Predict and call the tool."""
        tool_specs = [tool.metadata.to_openai_tool() for tool in tools]

        # if strict is passed in, use it; else default to the class-level attribute, else default to True
        if strict is not None:
            strict = strict
        else:
            strict = self.strict

        if self.metadata.is_function_calling_model:
            for tool_spec in tool_specs:
                if tool_spec["type"] == "function":
                    tool_spec["function"]["strict"] = strict
                    # in current openai 1.40.0 it is always false.
                    tool_spec["function"]["parameters"]["additionalProperties"] = False

        if isinstance(user_msg, str):
            user_msg = ChatMessage(role=MessageRole.USER, content=user_msg)

        messages = chat_history or []
        if user_msg:
            messages.append(user_msg)

        return {
            "messages": messages,
            "tools": tool_specs or None,
            "tool_choice": resolve_tool_choice(tool_choice) if tool_specs else None,
            **kwargs,
        }

I hope skip_length_check can be passed through when calling achat_with_tools.
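In the meantime, a workaround suggested by the error message itself is to keep the tool description under the limit and move the overflow into the prompt. A minimal sketch (the helper name is my own, not a llama_index API):

```python
from typing import Tuple


def split_long_description(description: str, limit: int = 1024) -> Tuple[str, str]:
    """Keep the first `limit` characters as the tool description and return the
    overflow separately, so it can be moved into the system prompt instead."""
    if len(description) <= limit:
        return description, ""
    return description[:limit], description[limit:]


short, overflow = split_long_description("a" * 2000)
# `short` fits the 1024-character limit; `overflow` can be appended to the
# system prompt, e.g. "Additional notes for tool my_tool: " + overflow
print(len(short), len(overflow))
```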

Relevant Logs/Tracebacks

Traceback (most recent call last):
  File "E:\work\incremental-learning\api\.venv\Lib\site-packages\llama_index\core\workflow\workflow.py", line 270, in _task
    new_ev = await instrumented_step(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\work\incremental-learning\api\.venv\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 368, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\work\incremental-learning\api\src\job_assistant\task\agent\agent.py", line 209, in handle_llm_input
    response = await self.llm.achat_with_tools(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\work\incremental-learning\api\.venv\Lib\site-packages\llama_index\core\llms\function_calling.py", line 65, in achat_with_tools
    chat_kwargs = self._prepare_chat_with_tools(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\work\incremental-learning\api\.venv\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "E:\work\incremental-learning\api\.venv\Lib\site-packages\llama_index\llms\openai\base.py", line 906, in _prepare_chat_with_tools
    tool_specs = [tool.metadata.to_openai_tool() for tool in tools]
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\work\incremental-learning\api\.venv\Lib\site-packages\llama_index\core\tools\types.py", line 77, in to_openai_tool
    raise ValueError(
ValueError: Tool description exceeds maximum length of 1024 characters. Please shorten your description or move it to the prompt.
@danerlt danerlt added bug Something isn't working triage Issue needs to be triaged/prioritized labels Feb 19, 2025
dosubot bot commented Feb 19, 2025

The _prepare_chat_with_tools method in the OpenAI class already includes the skip_length_check parameter when calling to_openai_tool. The method should look like this:

tool_specs = [
    tool.metadata.to_openai_tool(skip_length_check=True) for tool in tools
]

This line of code ensures that the skip_length_check=True argument is passed to the to_openai_tool method for each tool in the tools list, allowing you to bypass the length check for tool descriptions. If this is not reflected in your current version, you might need to update your code to include this parameter.
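If your installed version still calls to_openai_tool() without the flag, one stopgap is to patch the method at runtime so skip_length_check defaults to True. Below is a self-contained sketch of the patching pattern; the ToolMetadata class here is a minimal stand-in mirroring the code quoted above, not llama_index's real class:

```python
class ToolMetadata:
    """Stand-in mirroring the length check in llama_index's ToolMetadata."""

    def __init__(self, name: str, description: str):
        self.name = name
        self.description = description

    def to_openai_tool(self, skip_length_check: bool = False) -> dict:
        if not skip_length_check and len(self.description) > 1024:
            raise ValueError(
                "Tool description exceeds maximum length of 1024 characters."
            )
        return {
            "type": "function",
            "function": {"name": self.name, "description": self.description},
        }


# Patch: wrap the original method so the check is skipped by default
_original_to_openai_tool = ToolMetadata.to_openai_tool
ToolMetadata.to_openai_tool = (
    lambda self, skip_length_check=True: _original_to_openai_tool(
        self, skip_length_check
    )
)

meta = ToolMetadata("my_tool", "x" * 2000)
spec = meta.to_openai_tool()  # no longer raises despite the long description
print(spec["function"]["name"])
```

Applied against the real class, the same pattern would patch llama_index.core.tools.types.ToolMetadata before any tools are converted; treat it as a temporary measure until the parameter is plumbed through.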


logan-markewich (Collaborator) commented

Are you using OpenAI? It would actually break their API if that request were made; OpenAI imposes this limit.

danerlt (Author) commented Feb 21, 2025

@logan-markewich No, the LLM I use is of the OpenAILike class.
