
add ollama error #14330

Closed

xuesd opened this issue Feb 25, 2025 · 3 comments
Labels
🐞 bug Something isn't working · 🤔 cant-reproduce We cannot reproduce it or the information is limited

Comments

xuesd commented Feb 25, 2025

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

1.0.0-beta

Cloud or Self Hosted

Self Hosted (Source)

Steps to reproduce

[Two screenshots attached]
When adding the Ollama configuration in the model provider interface, with the Ollama address set to the local 192.168.xx.xx:11434, the backend reports an error message.

✔️ Expected Behavior

Ollama is added successfully.

❌ Actual Behavior

The backend reports an error message.

@crazywoola crazywoola added the 🤔 cant-reproduce We can not reproduce it or the information is limited label Feb 25, 2025
@dosubot dosubot bot added the 🐞 bug Something isn't working label Feb 25, 2025

dosubot bot commented Feb 25, 2025

Hey @xuesd! 👋 I'm here to help you squash bugs, answer questions, and even guide you on becoming a contributor. Let's tackle this issue together!

I found a similar solved discussion that might help you resolve the issue with adding an Ollama configuration:

  1. Adding a local ollama model error: The solution involved upgrading Ollama to the latest version and using the new endpoint api/embed, as the old one, api/embeddings, has been deprecated [1] (a quick probe is sketched below).
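If it helps to verify which endpoint your Ollama build exposes, here is a minimal sketch (the 192.168.xx.xx address is the placeholder from this issue, and nomic-embed-text is just an example embedding model; substitute whatever model you pulled):

```python
import json
import urllib.request

# Probe the current embeddings endpoint; Ollama versions that predate the
# api/embed route will answer 404 here and only serve api/embeddings.
url = "http://192.168.xx.xx:11434/api/embed"
payload = json.dumps({"model": "nomic-embed-text", "input": "hello"}).encode("utf-8")

req = urllib.request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.status, resp.read()[:200])  # expect HTTP 200 with an "embeddings" field
```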

Additionally, here are some troubleshooting steps you can follow to resolve the error when adding an Ollama configuration with the local address:

  1. Ensure the Ollama service is running and accessible at the specified URL on the host machine.
  2. Set the OLLAMA_HOST environment variable to 0.0.0.0 to make it accessible from other machines and Docker containers.
  3. Use the correct IP address from inside Docker: instead of localhost, use the Docker host's address, such as its local-network IP.
  4. Adjust firewall settings to allow incoming connections on port 11434.
  5. Restart the Ollama application after setting the environment variable.

These steps should help make the Ollama service accessible from the Docker container [2]. A minimal connectivity check is sketched below.
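As a sketch of that check (again using the 192.168.xx.xx placeholder from this issue; a stock Ollama server answers a plain GET on its root URL with "Ollama is running"):

```python
import urllib.request

# Verify the Ollama server is reachable from wherever Dify runs
# (e.g. from inside the Dify Docker container).
base_url = "http://192.168.xx.xx:11434"

try:
    with urllib.request.urlopen(base_url, timeout=5) as resp:
        print(resp.status, resp.read().decode())  # expect: 200 Ollama is running
except OSError as exc:
    # Connection refused or a timeout here usually means OLLAMA_HOST is not
    # set to 0.0.0.0 or a firewall is blocking port 11434.
    print("Ollama is not reachable:", exc)
```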


crazywoola (Member) commented

I don't think there is a model called ds-r1 provided by Ollama. If you downloaded deepseek-r1:8b from Ollama, please enter the full name.
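To confirm the exact names the server knows, you can run `ollama list` on the host, or query the tags endpoint; a minimal sketch (again with the placeholder address from this issue):

```python
import json
import urllib.request

# List the models the Ollama server actually serves, with their full
# name:tag strings -- the value Dify expects in the model-name field.
with urllib.request.urlopen("http://192.168.xx.xx:11434/api/tags", timeout=5) as resp:
    data = json.load(resp)

for model in data.get("models", []):
    print(model["name"])  # e.g. "deepseek-r1:14b"
```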

xuesd (Author) commented Feb 25, 2025

Thank you. I abbreviated the name here; it was originally deepseek-r1:14b, and it gives the same error.

@dosubot dosubot bot mentioned this issue Feb 25, 2025