
Configured for GPT-4 but responding as GPT-3 #133

Closed
2 tasks done
3gyptian opened this issue Nov 1, 2023 · 2 comments
Labels
bug Something isn't working

Comments


3gyptian commented Nov 1, 2023

Verify it's not a duplicate bug report

Describe the Bug

When I ask at the prompt: Are you based on GPT-3 or GPT-4 architecture?

It returns: I am based on the GPT-3 model.

When running a Python script with the same API key and question, it returns: We are based on GPT-4 architecture.

I confirmed that the Genie AI settings are set to gpt-4. Is there something else that needs to be configured?

thank you!

Please tell us if you have customized any of the extension settings or whether you are using the defaults.

defaults but with gpt-4 setting

Additional context

No response

@3gyptian 3gyptian added the bug Something isn't working label Nov 1, 2023

3gyptian commented Nov 2, 2023

So I ran the same Python script testing the API the following morning and it returned GPT-3! Looked into it some more and it turns out the response can vary arbitrarily even when connected to GPT-4.

Here's the updated Python script that properly reports the GPT model one is connected to:

import openai  # legacy pre-1.0 openai library; openai>=1.0 uses client.chat.completions.create instead

openai.api_key = '<api key>'

response = openai.ChatCompletion.create(
    model="gpt-4",  # You can adjust this to the specific GPT-4 chat model ID you have access to.
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Are you based on GPT-4 architecture?"}
    ]
)

# The 'model' field identifies the model that actually served the request.
print("Model used:", response['model'])
print("Response:", response['choices'][0]['message']['content'].strip())

Which outputs:

Model used: gpt-4-0613
Response: As an artificial intelligence, I'm a model powered by OpenAI's GPT-3. At the moment, a GPT-4 model hasn't been released. Please note that even while using the GPT-3 model, I'm continuously updated to improve my capabilities and provide better assistance.
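The takeaway: the assistant's self-description in the message text is unreliable; the `model` field of the API response is what actually identifies the serving model. A minimal sketch of reading that field from a response payload (the dict below is an illustrative stand-in for the API's JSON, not a real call):

```python
# A sample chat-completion payload shaped like the API's JSON response.
# Values are illustrative (taken from the output above), not a live call.
sample_response = {
    "model": "gpt-4-0613",
    "choices": [
        {"message": {"role": "assistant",
                     "content": "I'm a model powered by OpenAI's GPT-3."}}
    ],
}

def served_model(response: dict) -> str:
    """Return the model that actually served the request.

    Trust the top-level 'model' field; the assistant's self-description
    in the message text can contradict it, as this issue shows.
    """
    return response["model"]

print(served_model(sample_response))  # gpt-4-0613
```

So even when the message content claims GPT-3, the `model` field confirms the request was handled by gpt-4-0613.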

Go figure .. lol

Closing this bug ..


3gyptian commented Nov 2, 2023

closed

@3gyptian 3gyptian closed this as completed Nov 2, 2023