Installation issue #1038

Open
tsh2018 opened this issue Dec 21, 2024 · 1 comment
Labels
bug Something isn't working

Comments


tsh2018 commented Dec 21, 2024

After starting the container and opening the UI, an error is reported.

[screenshots of the UI error]

Error log:

2024-12-21 13:51:44 INFO: Started server process [8]
2024-12-21 13:51:44 INFO: Waiting for application startup.
2024-12-21 13:51:44 2024-12-21 05:51:44,318 - wren-ai-service - INFO - Imported Provider: src.providers.document_store (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,958 - wren-ai-service - INFO - Registering provider: openai_embedder (loader.py:66)
2024-12-21 13:51:44 2024-12-21 05:51:44,958 - wren-ai-service - INFO - Registering provider: qdrant (loader.py:66)
2024-12-21 13:51:44 2024-12-21 05:51:44,958 - wren-ai-service - INFO - Imported Provider: src.providers.document_store.qdrant (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,958 - wren-ai-service - INFO - Imported Provider: src.providers.embedder (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,959 - wren-ai-service - INFO - Registering provider: azure_openai_embedder (loader.py:66)
2024-12-21 13:51:44 2024-12-21 05:51:44,959 - wren-ai-service - INFO - Imported Provider: src.providers.embedder.azure_openai (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,962 - wren-ai-service - INFO - Registering provider: ollama_embedder (loader.py:66)
2024-12-21 13:51:44 2024-12-21 05:51:44,962 - wren-ai-service - INFO - Imported Provider: src.providers.embedder.ollama (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,962 - wren-ai-service - INFO - Imported Provider: src.providers.embedder.openai (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,962 - wren-ai-service - INFO - Imported Provider: src.providers.engine (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,964 - wren-ai-service - INFO - Registering provider: wren_ui (loader.py:66)
2024-12-21 13:51:44 2024-12-21 05:51:44,964 - wren-ai-service - INFO - Registering provider: wren_ibis (loader.py:66)
2024-12-21 13:51:44 2024-12-21 05:51:44,964 - wren-ai-service - INFO - Registering provider: wren_engine (loader.py:66)
2024-12-21 13:51:44 2024-12-21 05:51:44,964 - wren-ai-service - INFO - Imported Provider: src.providers.engine.wren (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,966 - wren-ai-service - INFO - Imported Provider: src.providers.llm (loader.py:42)
2024-12-21 13:51:44 2024-12-21 05:51:44,980 - wren-ai-service - INFO - Registering provider: azure_openai_llm (loader.py:66)
2024-12-21 13:51:44 2024-12-21 05:51:44,980 - wren-ai-service - INFO - Imported Provider: src.providers.llm.azure_openai (loader.py:42)
2024-12-21 13:51:47 2024-12-21 05:51:47,683 - wren-ai-service - INFO - Registering provider: litellm_llm (loader.py:66)
2024-12-21 13:51:47 2024-12-21 05:51:47,684 - wren-ai-service - INFO - Imported Provider: src.providers.llm.litellm (loader.py:42)
2024-12-21 13:51:47 2024-12-21 05:51:47,689 - wren-ai-service - INFO - Registering provider: ollama_llm (loader.py:66)
2024-12-21 13:51:47 2024-12-21 05:51:47,689 - wren-ai-service - INFO - Imported Provider: src.providers.llm.ollama (loader.py:42)
2024-12-21 13:51:47 2024-12-21 05:51:47,785 - wren-ai-service - INFO - Registering provider: openai_llm (loader.py:66)
2024-12-21 13:51:47 2024-12-21 05:51:47,785 - wren-ai-service - INFO - Imported Provider: src.providers.llm.openai (loader.py:42)
2024-12-21 13:51:47 2024-12-21 05:51:47,785 - wren-ai-service - INFO - Imported Provider: src.providers.loader (loader.py:42)
2024-12-21 13:51:47 2024-12-21 05:51:47,785 - wren-ai-service - INFO - initializing provider: openai_embedder (__init__.py:18)
2024-12-21 13:51:47 2024-12-21 05:51:47,785 - wren-ai-service - INFO - Getting provider: openai_embedder from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>} (loader.py:93)
2024-12-21 13:51:47 2024-12-21 05:51:47,785 - wren-ai-service - INFO - Initializing OpenAIEmbedder provider with API base: https://api.openai.com/v1 (openai.py:203)
2024-12-21 13:51:47 2024-12-21 05:51:47,786 - wren-ai-service - INFO - Using OpenAI Embedding Model: text-embedding-3-large (openai.py:207)
2024-12-21 13:51:47 2024-12-21 05:51:47,786 - wren-ai-service - INFO - initializing provider: litellm_llm (__init__.py:18)
2024-12-21 13:51:47 2024-12-21 05:51:47,786 - wren-ai-service - INFO - Getting provider: litellm_llm from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>} (loader.py:93)
2024-12-21 13:51:47 2024-12-21 05:51:47,786 - wren-ai-service - INFO - initializing provider: litellm_llm (__init__.py:18)
2024-12-21 13:51:47 2024-12-21 05:51:47,786 - wren-ai-service - INFO - Getting provider: litellm_llm from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>} (loader.py:93)
2024-12-21 13:51:47 2024-12-21 05:51:47,786 - wren-ai-service - INFO - initializing provider: qdrant (__init__.py:18)
2024-12-21 13:51:47 2024-12-21 05:51:47,786 - wren-ai-service - INFO - Getting provider: qdrant from {'openai_embedder': <class 'src.providers.embedder.openai.OpenAIEmbedderProvider'>, 'qdrant': <class 'src.providers.document_store.qdrant.QdrantProvider'>, 'azure_openai_embedder': <class 'src.providers.embedder.azure_openai.AzureOpenAIEmbedderProvider'>, 'ollama_embedder': <class 'src.providers.embedder.ollama.OllamaEmbedderProvider'>, 'wren_ui': <class 'src.providers.engine.wren.WrenUI'>, 'wren_ibis': <class 'src.providers.engine.wren.WrenIbis'>, 'wren_engine': <class 'src.providers.engine.wren.WrenEngine'>, 'azure_openai_llm': <class 'src.providers.llm.azure_openai.AzureOpenAILLMProvider'>, 'litellm_llm': <class 'src.providers.llm.litellm.LitellmLLMProvider'>, 'ollama_llm': <class 'src.providers.llm.ollama.OllamaLLMProvider'>, 'openai_llm': <class 'src.providers.llm.openai.OpenAILLMProvider'>} (loader.py:93)
2024-12-21 13:51:47 2024-12-21 05:51:47,786 - wren-ai-service - INFO - Using Qdrant Document Store with Embedding Model Dimension: 3072 (qdrant.py:370)
2024-12-21 13:51:47 ERROR: Traceback (most recent call last):
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
2024-12-21 13:51:47 yield
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 236, in handle_request
2024-12-21 13:51:47 resp = self._pool.handle_request(req)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
2024-12-21 13:51:47 raise exc from None
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
2024-12-21 13:51:47 response = connection.handle_request(
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 101, in handle_request
2024-12-21 13:51:47 raise exc
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 78, in handle_request
2024-12-21 13:51:47 stream = self._connect(request)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 124, in _connect
2024-12-21 13:51:47 stream = self._network_backend.connect_tcp(**kwargs)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 207, in connect_tcp
2024-12-21 13:51:47 with map_exceptions(exc_map):
2024-12-21 13:51:47 File "/usr/local/lib/python3.12/contextlib.py", line 155, in __exit__
2024-12-21 13:51:47 self.gen.throw(value)
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
2024-12-21 13:51:47 raise to_exc(exc) from exc
2024-12-21 13:51:47 httpcore.ConnectError: [Errno 111] Connection refused
2024-12-21 13:51:47
2024-12-21 13:51:47 The above exception was the direct cause of the following exception:
2024-12-21 13:51:47
2024-12-21 13:51:47 Traceback (most recent call last):
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/http/api_client.py", line 106, in send_inner
2024-12-21 13:51:47 response = self._client.send(request)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 926, in send
2024-12-21 13:51:47 response = self._send_handling_auth(
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 954, in _send_handling_auth
2024-12-21 13:51:47 response = self._send_handling_redirects(
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
2024-12-21 13:51:47 response = self._send_single_request(request)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1027, in _send_single_request
2024-12-21 13:51:47 response = transport.handle_request(request)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 235, in handle_request
2024-12-21 13:51:47 with map_httpcore_exceptions():
2024-12-21 13:51:47 File "/usr/local/lib/python3.12/contextlib.py", line 155, in __exit__
2024-12-21 13:51:47 self.gen.throw(value)
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
2024-12-21 13:51:47 raise mapped_exc(message) from exc
2024-12-21 13:51:47 httpx.ConnectError: [Errno 111] Connection refused
2024-12-21 13:51:47
2024-12-21 13:51:47 During handling of the above exception, another exception occurred:
2024-12-21 13:51:47
2024-12-21 13:51:47 Traceback (most recent call last):
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
2024-12-21 13:51:47 async with self.lifespan_context(app) as maybe_state:
2024-12-21 13:51:47 File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2024-12-21 13:51:47 return await anext(self.gen)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
2024-12-21 13:51:47 async with original_context(app) as maybe_original_state:
2024-12-21 13:51:47 File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2024-12-21 13:51:47 return await anext(self.gen)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
2024-12-21 13:51:47 async with original_context(app) as maybe_original_state:
2024-12-21 13:51:47 File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
2024-12-21 13:51:47 return await anext(self.gen)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/src/main.py", line 30, in lifespan
2024-12-21 13:51:47 pipe_components = generate_components(settings.components)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/src/providers/__init__.py", line 396, in generate_components
2024-12-21 13:51:47 identifier: provider_factory(config)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/src/providers/__init__.py", line 19, in provider_factory
2024-12-21 13:51:47 return loader.get_provider(config.get("provider"))(**config)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/src/providers/document_store/qdrant.py", line 358, in __init__
2024-12-21 13:51:47 self._reset_document_store(recreate_index)
2024-12-21 13:51:47 File "/src/providers/document_store/qdrant.py", line 361, in _reset_document_store
2024-12-21 13:51:47 self.get_store(recreate_index=recreate_index)
2024-12-21 13:51:47 File "/src/providers/document_store/qdrant.py", line 374, in get_store
2024-12-21 13:51:47 return AsyncQdrantDocumentStore(
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/src/providers/document_store/qdrant.py", line 161, in __init__
2024-12-21 13:51:47 self.client.create_payload_index(
2024-12-21 13:51:47 ^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/haystack_integrations/document_stores/qdrant/document_store.py", line 280, in client
2024-12-21 13:51:47 self._set_up_collection(
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/haystack_integrations/document_stores/qdrant/document_store.py", line 857, in _set_up_collection
2024-12-21 13:51:47 if recreate_collection or not self.client.collection_exists(collection_name):
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/qdrant_client.py", line 2097, in collection_exists
2024-12-21 13:51:47 return self._client.collection_exists(collection_name=collection_name, **kwargs)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/qdrant_remote.py", line 2623, in collection_exists
2024-12-21 13:51:47 result: Optional[models.CollectionExistence] = self.http.collections_api.collection_exists(
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/http/api/collections_api.py", line 1157, in collection_exists
2024-12-21 13:51:47 return self._build_for_collection_exists(
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/http/api/collections_api.py", line 87, in _build_for_collection_exists
2024-12-21 13:51:47 return self.api_client.request(
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/http/api_client.py", line 79, in request
2024-12-21 13:51:47 return self.send(request, type_)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/http/api_client.py", line 96, in send
2024-12-21 13:51:47 response = self.middleware(request, self.send_inner)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/http/api_client.py", line 205, in __call__
2024-12-21 13:51:47 return call_next(request)
2024-12-21 13:51:47 ^^^^^^^^^^^^^^^^^^
2024-12-21 13:51:47 File "/app/.venv/lib/python3.12/site-packages/qdrant_client/http/api_client.py", line 108, in send_inner
2024-12-21 13:51:47 raise ResponseHandlingException(e)
2024-12-21 13:51:47 qdrant_client.http.exceptions.ResponseHandlingException: [Errno 111] Connection refused
2024-12-21 13:51:47
2024-12-21 13:51:47 ERROR: Application startup failed. Exiting.
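The root cause visible in the trace is that wren-ai-service cannot open a TCP connection to Qdrant (`[Errno 111] Connection refused`), so the Qdrant container is likely not running or not reachable from the AI service. As a minimal sketch, assuming Qdrant's default HTTP port 6333 on localhost (the actual host and port come from your docker-compose.yaml), reachability can be checked from the host like this:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, DNS failures
        return False

if __name__ == "__main__":
    # 6333 is Qdrant's default HTTP port; adjust to match docker-compose.yaml
    print("qdrant reachable:", port_open("localhost", 6333))
```

If this prints `False`, check `docker ps` to confirm the qdrant container is actually up before restarting wren-ai-service.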

Wren AI Information

  • Version: [e.g., 0.1.0]
  • LLM_PROVIDER= openai_llm,
  • GENERATION_MODEL= # gpt-3.5-turbo, llama3:70b, etc.

Additional context

Separately, when starting with the executable, I often run into the following issue. I have manually downloaded the docker-compose.yaml file and tried many approaches, but still cannot resolve it. (It worked before, and I could open localhost:3000 directly in the browser, but now it no longer works.)
[screenshot of the launcher error]

For additional context: I am trying to connect to the platform at https://cloud.siliconflow.cn/ alongside the OpenAI integration. I can share my specific configuration file and details of which steps have worked or failed so far if that helps.
[screenshots of the configuration]

@tsh2018 added the bug (Something isn't working) label Dec 21, 2024
cyyeh (Member) commented Dec 21, 2024

@tsh2018 Hi, you should not put the API key itself in api_key_name. Instead, give it a name such as LLM_THUDM_API_KEY, and put LLM_THUDM_API_KEY=<YOUR_API_KEY> in the .env file in ~/.wrenai. Then restart wren-ai-service by running this command in the ~/.wrenai folder: docker-compose --env-file .env up -d --force-recreate wren-ai-service
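To illustrate the layout described above (the key name LLM_THUDM_API_KEY follows the comment; the value shown is a placeholder, not a real key):

```shell
# ~/.wrenai/.env -- the real key lives here, not in config.yaml
LLM_THUDM_API_KEY="sk-example-placeholder"
```

The api_key_name field in config.yaml then holds only the variable name LLM_THUDM_API_KEY, and `docker-compose --env-file .env up -d --force-recreate wren-ai-service` injects the value at startup.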

Please read the docs here for reference: https://docs.getwren.ai/oss/installation/custom_llm#running-wren-ai-with-your-custom-llm-embedder-or-document-store

Also, we've just released a patch; please use this version: https://github.com/Canner/WrenAI/releases/tag/0.13.1. Thanks!
