
API key error occurs when setting model load balancing with Google model #12850

Closed
5 tasks done
fujita-h opened this issue Jan 19, 2025 · 3 comments · Fixed by #12853
Labels: 🐞 bug (Something isn't working)

Comments

@fujita-h
Contributor

Self Checks

  • This is only for bug report, if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.15.1

Cloud or Self Hosted

Cloud, Self Hosted (Source)

Steps to reproduce

  1. Set up the Google Gemini model
  2. Add a load balancing setting to a model, for example "Gemini 1.5 Flash"
  3. Create a simple flow that uses "Gemini 1.5 Flash"
  4. Run the flow several times

✔️ Expected Behavior

Requests are load balanced across the configured API keys and the flow succeeds every time.

❌ Actual Behavior

An error occurs whenever a request is routed to the API key that was additionally configured for load balancing.

[Screenshot of the error]

dosubot bot added the 🐞 bug (Something isn't working) label on Jan 19, 2025

dosubot bot commented Jan 19, 2025

I found a similar unresolved discussion regarding model load balancing issues, which might be relevant to your situation. You can check it out here [1].

Regarding the API key error you're encountering, Dify returns a specific error message and code when an API key is invalid or misconfigured. The error code is 1001, and the message is "Invalid Authorization header format. Expected 'Bearer ' format." This might be related to the 400 Bad Request error you're seeing [2][3].

Additionally, Dify validates API keys for Google models during the load balancing process using the validateLoadBalancingCredentials function. This function checks the credentials by sending a POST request to a specific URL endpoint. If the validation fails, it returns an error status and message [4][5].

Ensure that your API key is correctly formatted and that the credentials are valid. If the issue persists, you might want to review the load balancing configuration and ensure that all settings are correctly applied.

To continue talking to Dosu, mention @dosu.



@kurokobo
Contributor

@fujita-h
Hi, thanks for the report and PR.

In my understanding, this issue can occur not only with Google but also with any provider that does not have model_credential_schema.

The model_credential_schema is a schema designed to support Customizable Models for the provider, which means that providers that only support predefined-model typically do not have model_credential_schema. Therefore, the same issue can arise in providers that do not support Customizable Models (e.g., Anthropic and many others) as well.

As a potential fix, adding model_credential_schema to Google is one option, but adopting that proposal would mean making model_credential_schema mandatory for all providers that do not use Customizable Models.

Note that the Add Config for LB screen already builds its form from credential_form_schemas in either provider_credential_schema or model_credential_schema, depending on the configuration method (predefined or customizable):

providerFormSchemaPredefined
? provider.provider_credential_schema.credential_form_schemas
: provider.model_credential_schema.credential_form_schemas

Therefore, to determine which variables should be decrypted, it seems more natural to reference provider_credential_schema or model_credential_schema depending on the configuration method as well, rather than always referring to model_credential_schema.
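
For illustration only, here is a minimal Python sketch of that idea; the class and function names are assumptions for this example and not Dify's actual code. It mirrors the web-side ternary and falls back gracefully when a provider, like Google, does not define model_credential_schema.

```python
from typing import Optional


class CredentialSchema:
    """Stand-in for a provider's credential schema (illustrative only)."""
    def __init__(self, credential_form_schemas: list):
        self.credential_form_schemas = credential_form_schemas


class ProviderEntity:
    """Stand-in for a provider entity that may lack one of the two schemas."""
    def __init__(self, provider_credential_schema: Optional[CredentialSchema] = None,
                 model_credential_schema: Optional[CredentialSchema] = None):
        self.provider_credential_schema = provider_credential_schema
        self.model_credential_schema = model_credential_schema


def credential_form_schemas_for(provider: ProviderEntity, configurate_method: str) -> list:
    # Mirror the web-side selection: predefined models read provider_credential_schema,
    # customizable models read model_credential_schema.
    schema = (provider.provider_credential_schema
              if configurate_method == "predefined-model"
              else provider.model_credential_schema)
    if schema is None:
        # A provider such as Google defines only provider_credential_schema,
        # so a missing schema should not be treated as an error here.
        return []
    return schema.credential_form_schemas


# Example: a Google-like provider that only supports predefined models.
google_like = ProviderEntity(
    provider_credential_schema=CredentialSchema(["google_api_key"]),
    model_credential_schema=None,
)
print(credential_form_schemas_for(google_like, "predefined-model"))    # ['google_api_key']
print(credential_form_schemas_for(google_like, "customizable-model"))  # []
```

The decrypt/obfuscate step for load balancing credentials would then iterate over whatever list this returns, instead of assuming model_credential_schema is always present.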

@fujita-h
Contributor Author

@kurokobo
Thank you for checking.

I have also created a PR that makes the way provider_credential_schema and model_credential_schema are referenced consistent with the web side.
