
[Bug] Providing tool response back to llm for output generation is broken for llama3.1 8B #2542

Open · 3 tasks done
S1LV3RJ1NX opened this issue Sep 30, 2024 · 2 comments
S1LV3RJ1NX commented Sep 30, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

I am using LangGraph for tool calling with the Llama-3.1 8B Instruct model.
The model's responses work well up to the tool-calling step, but the server returns an internal server error when the tool response is sent back to the LLM to produce the final output.

Reproduction

Steps to reproduce:

The following code snippet serves as an example:

# Connect to the lmdeploy OpenAI-compatible server
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model=model_name,        # e.g. "meta-llama/Llama-3.1-8B-Instruct"
    temperature=0,
    base_url=model_url,      # e.g. "http://localhost:8000/v1"
    api_key="EMPTY",
    disable_streaming=True,
)

# Define tools
from langchain_core.tools import tool
from typing import Literal
@tool
def get_weather(city: Literal["nyc", "sf"]):
    """Use this to get weather information."""
    if city == "nyc":
        return "It might be cloudy in nyc"
    elif city == "sf":
        return "It's always sunny in sf"
    else:
        raise AssertionError("Unknown city")


tools = [get_weather]

# Define the graph
from langgraph.prebuilt import create_react_agent
graph = create_react_agent(model, tools=tools)

# Printing
def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()

# Run:
inputs = {"messages": [("user", "what is the weather in sf")]}
print_stream(graph.stream(inputs, stream_mode="values"))

# OUTPUT:
================================ Human Message =================================

what is the weather in sf
================================== Ai Message ==================================

<function=get_weather>{"city": "sf"}</function>
Tool Calls:
  get_weather (0)
 Call ID: 0
  Args:
    city: sf
================================= Tool Message =================================
Name: get_weather

It's always sunny in sf
------ Server Error ---
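
For reference, the failing second turn can likely be reproduced without LangGraph by replaying the same message sequence through the raw OpenAI client against the lmdeploy endpoint. This is a minimal sketch based on the setup above; the base URL, model name, and tool-call id "0" are assumptions taken from the Environment section and the Call ID printed in the output, and the exact payload LangGraph sends may differ slightly.

# Sketch of the second-turn request that appears to trigger the 500:
# user message -> assistant message with a tool_call -> tool result message.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Use this to get weather information.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "enum": ["nyc", "sf"]}},
            "required": ["city"],
        },
    },
}]

messages = [
    {"role": "user", "content": "what is the weather in sf"},
    {   # assistant turn that requested the tool call (Call ID 0 in the output above)
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "0",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "sf"}'},
        }],
    },
    {   # tool result sent back to the model for the final answer
        "role": "tool",
        "tool_call_id": "0",
        "content": "It's always sunny in sf",
    },
]

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=messages,
    tools=tools,
    temperature=0,
)
print(response.choices[0].message.content)

If this request also returns a 500, that would suggest the problem is in how the server handles assistant tool_calls / tool messages in the chat template rather than in LangGraph itself.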

Environment

Docker image: openmmlab/lmdeploy:v0.6.1-cu11

Command: lmdeploy serve api_server --server-port 8000 meta-llama/Llama-3.1-8B-Instruct

Error traceback

September 30 19:41:33.390
Traceback (most recent call last):
  File "/opt/py3/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 406, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/opt/py3/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
  File "/opt/py3/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/opt/py3/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 76, in app

AllentDan (Collaborator) commented:

Hi @S1LV3RJ1NX, you may check whether #2558 meets your need.

AllentDan (Collaborator) commented:

Similar issue: #2366
