Dify version: latest
Cloud or Self Hosted: Self Hosted (Docker)
Steps to reproduce
cd dify
cd docker
cp .env.example .env
docker-compose up -d
Add a model under the model provider settings and select OpenAI-API-compatible.
If I serve a model on port 8000, everything works fine, but I want to serve several models with vLLM on different ports.
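For context, a minimal sketch of what serving a few models on different ports looks like with vLLM's OpenAI-compatible server (the model names below are placeholders, not the exact models used here; port 1724 is taken from the error further down):
# Hypothetical example: one vLLM server per model, each on its own port
# --host 0.0.0.0 so the servers are reachable from the Dify containers
python -m vllm.entrypoints.openai.api_server --model my-org/chat-model --host 0.0.0.0 --port 8000
python -m vllm.entrypoints.openai.api_server --model my-org/embedding-model --host 0.0.0.0 --port 1724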
✔️ Expected Behavior
I can add many models successfully
❌ Actual Behavior
Traceback (most recent call last):
  File "/app/api/controllers/console/workspace/models.py", line 154, in post
    model_provider_service.save_model_credentials(
  File "/app/api/services/model_provider_service.py", line 242, in save_model_credentials
    provider_configuration.add_or_update_custom_model_credentials(
  File "/app/api/core/entities/provider_configuration.py", line 393, in add_or_update_custom_model_credentials
    provider_model_record, credentials = self.custom_model_credentials_validate(model_type, model, credentials)
                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/core/entities/provider_configuration.py", line 373, in custom_model_credentials_validate
    credentials = model_provider_factory.model_credentials_validate(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/core/model_runtime/model_providers/model_provider_factory.py", line 123, in model_credentials_validate
    model_instance.validate_credentials(model, filtered_credentials)
  File "/app/api/core/model_runtime/model_providers/openai_api_compatible/text_embedding/text_embedding.py", line 172, in validate_credentials
    raise CredentialsValidateFailedError(str(ex))
core.model_runtime.errors.validate.CredentialsValidateFailedError: HTTPConnectionPool(host='192.168.80.25', port=1724): Max retries exceeded with url: /v1/embeddings (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f0046241d00>, 'Connection to 192.168.80.25 timed out. (connect timeout=10)'))
@Thuandv-seta Hello there! 👋 I'm here to help you with your bug report. I can assist with solving bugs, answering questions, and even guide you on how to become a contributor. Let's squash those bugs together!
I found two similar unsolved discussions that might be relevant to your issue:
Regarding your issue, the error message indicates a ConnectTimeoutError, which suggests that the connection to the specified host and port is timing out. Here are a few things you might consider checking:
Network Configuration: Ensure that the network settings allow connections to the specified host and port. Verify that there are no firewall rules or network policies blocking the connection.
Port Availability: Since serving the model on port 8000 works fine, ensure that port 1724 is open and not being used by another service (a quick connectivity check and an example .env adjustment are sketched after this list).
Timeout Settings: You might want to adjust the timeout settings to allow more time for the connection to be established. Dify allows configuration of HTTP connection settings, including timeouts, through environment variables. You can set these in your .env file:
HTTP_REQUEST_MAX_CONNECT_TIMEOUT
HTTP_REQUEST_MAX_READ_TIMEOUT
HTTP_REQUEST_MAX_WRITE_TIMEOUT
For example, increasing HTTP_REQUEST_MAX_CONNECT_TIMEOUT might help if the connection takes longer than expected to establish [3].
Endpoint Configuration: Double-check the API endpoint URL configuration to ensure it is correct and accessible from the environment where Dify is running.
These steps should help you troubleshoot the connection timeout issue.
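As a concrete sketch of the first two checks (the host and port come from the error message above; the timeout value is only an example), you could verify reachability from the machine running Dify and, if needed, raise the connect timeout in docker/.env:
# Reachability check from the Docker host; repeat from inside the api container
# (e.g. docker-compose exec api curl ..., if curl is available there) to rule out Docker networking
curl -m 10 http://192.168.80.25:1724/v1/models
# docker/.env -- example value; re-run docker-compose up -d afterwards so the api container picks it up
HTTP_REQUEST_MAX_CONNECT_TIMEOUT=30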
I don't think this is related to Dify itself. If you can connect with the default config, it should also work with other ports. I suggest checking the configuration of the provider itself.
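A common culprit in this situation (an assumption on my part, not something confirmed in this thread) is that the second vLLM instance is only listening on 127.0.0.1, or that the host firewall only allows the first port. On the machine serving the models (192.168.80.25) you could check:
# Confirm the server on port 1724 is listening on 0.0.0.0 (or the LAN interface), not just 127.0.0.1
ss -ltnp | grep 1724
# If firewalld is in use, open the port (adjust for ufw/iptables as appropriate)
sudo firewall-cmd --add-port=1724/tcp --permanent && sudo firewall-cmd --reload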