
Is there any integration case for Dify to connect to the locally deployed jina-embeddings-v3 model? #12243

Closed

lanqy123 opened this issue Dec 31, 2024 · 4 comments

Labels: 👻 feat:rag (Embedding related issue, like qdrant, weaviate, milvus, vector database)

@lanqy123
Self Checks

  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Provide a description of requested docs changes

Is there any integration case for Dify to connect to the locally deployed jina-embeddings-v3 model?

dosubot bot added the 👻 feat:rag label on Dec 31, 2024

dosubot bot commented Dec 31, 2024

Yes, there is an integration case for Dify to connect to a locally deployed jina-embeddings-v3 model using Xinference. You can configure this by setting up the model in Dify under Settings > Model Provider > Xinference and selecting the Embeddings type in the configuration box. To deploy Xinference locally, install it via PyPI and start it using the xinference-local command. After deploying the model, connect it in Dify by entering the model name, server URL, and model UID in the settings [1][2][3].
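
A minimal end-to-end sketch of that flow, assuming the default Xinference port 9997 and that your Xinference version ships jina-embeddings-v3 as a built-in embedding model (both assumptions worth verifying against the Xinference docs):

```bash
# Install and start a local Xinference server; the [all] extra pulls in every
# backend, but a slimmer extra such as [transformers] is enough for embeddings.
pip install "xinference[all]"
xinference-local --host 0.0.0.0 --port 9997

# In a second shell, launch the embedding model; the command prints a model UID.
xinference launch --model-name jina-embeddings-v3 --model-type embedding

# Sanity-check the OpenAI-compatible embeddings endpoint before wiring up Dify.
# "model" takes the model UID (which often defaults to the model name).
curl -s http://localhost:9997/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "jina-embeddings-v3", "input": "hello"}'
```

With the server running, Settings > Model Provider > Xinference in Dify then needs the server URL (http://<host>:9997), the model name, and the model UID printed by `xinference launch`.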


@leslie2046
Contributor

[image]
How about giving it a try in Xinference?

@lanqy123
Author

[image]
When I tried to install "xinference[all]" with pip on my GPU machine, I hit the error shown in the image: "Failed to build llama-cpp-python. ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)."

@leslie2046
Contributor

I have also encountered this problem before; you can take a look at this:
xorbitsai/inference#1050
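
For reference, two workarounds that commonly resolve this class of build failure (assumptions drawn from the linked issue and the llama-cpp-python docs, not steps confirmed in this thread):

```bash
# Option 1: skip building llama-cpp-python entirely by installing a slimmer
# extra; the transformers backend is sufficient for serving embedding models.
pip install "xinference[transformers]"

# Option 2: install a C/C++ toolchain and CMake so the wheel can build from
# source (package names below are for Debian/Ubuntu; adjust for your distro).
sudo apt-get install -y build-essential cmake
pip install "xinference[all]"
```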
