
Goose is so slow I thought it was broken for the simplest (suggested) prompt #1463

Open
amayers opened this issue Mar 3, 2025 · 2 comments
Labels
clarification provide explanation for goose's behavior

Comments

amayers commented Mar 3, 2025

Describe the bug

Goose takes 15 seconds to respond to the simplest suggested question ("what can you do").

To Reproduce
Steps to reproduce the behavior:

  1. Install the app for the first time
  2. Click the sample query "what can you do"
  3. Get dumped into the browser with no explanation, and asked to authenticate with some other service (not Goose)
  4. If you abort that auth, Goose gets stuck with Address already in use (os error 48) #704
  5. If you complete the auth, Goose spins for 15 seconds to answer the simple tutorial question

Expected behavior
I downloaded the Goose app, so I'd expect its response to be instant for something simple like that. If I were running Goose in a browser, I'd be more understanding of it being slow, since it would be running on some busy server somewhere. But for a local app that doesn't need to interact with a remote API, I expect instant responses to simple things.

Screenshots

Screen.Recording.2025-03-03.at.9.10.21.AM.mov

Please provide the following information:

  • OS & Arch: macOS 15, M3
  • Interface: UI
  • Version: 1.0.9-block.202502252251-e51b7
  • Extensions enabled: unknown (stock macOS Goose download with no settings changed)
  • Provider & Model: unknown (stock macOS Goose download with no settings changed)
lily-de (Collaborator) commented Mar 3, 2025

Hi there! Thanks for the feedback -- I can give a bit of information as to why goose feels slow and what we are doing to improve that, but I'm open to other ideas!

  1. Latency due to message conversion from the Rust backend to the React frontend. We do not ping the LLM directly from the frontend; it calls an endpoint in the backend, which in turn calls the LLM and returns a response. This request + transformation step might be more involved than what you get with ChatGPT or Cursor.
  2. Other LLM-backed chat/agent experiences show the response word by word, so you see it as it is being written. In goose we currently wait for the full response from the LLM and then show it to the user, which may make it feel like a longer wait.

We are planning to tackle (2) in future work, but it hasn't been prioritized because we first had to refactor how the frontend and backend interact. We're hoping to start showing goose's responses as the LLM generates them, if not word by word then at least in chunks.
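To make the chunked idea concrete, here is a minimal hypothetical TypeScript sketch (not goose's actual code): `streamReply` stands in for a backend endpoint that streams the LLM reply in pieces, and the UI callback repaints with each partial text instead of waiting for the whole response. The chunk size is an arbitrary illustrative value.

```typescript
// Stand-in for a streamed backend response: yields the reply in fixed-size
// chunks rather than returning it all at once.
async function* streamReply(fullReply: string, chunkSize = 8): AsyncGenerator<string> {
  for (let i = 0; i < fullReply.length; i += chunkSize) {
    yield fullReply.slice(i, i + chunkSize);
  }
}

// Accumulate chunks and hand each partial text to the UI callback, so the
// first words appear long before the reply is complete.
async function renderStreamed(
  fullReply: string,
  onPaint: (partial: string) => void,
): Promise<string> {
  let shown = "";
  for await (const chunk of streamReply(fullReply)) {
    shown += chunk;
    onPaint(shown); // UI repaint happens here, once per chunk
  }
  return shown;
}
```

Total latency is unchanged, but perceived latency drops because the user sees output as soon as the first chunk arrives.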

@lily-de lily-de added the clarification provide explanation for goose's behavior label Mar 3, 2025
amayers (Author) commented Mar 3, 2025

  1. It doesn't seem like simple local frontend-to-backend communication would be this noticeable; it still seems like it should be <100 ms.
  2. I've never used another LLM, so I don't know how they feel. But for simple search-type requests, I expect that kind of request to be as fast as DuckDuckGo or Spotlight. If a search engine that I access remotely can respond in well under 1 second, I expect a local Mac app to be comparable, unless it's having to do a ton of file system access; then a few seconds, like Spotlight, seems normal. I don't see how changing it to reply word by word to make it feel faster is any better, since you are still waiting 15 seconds for a simple ask.
