Using o1-mini and o1 requires additional changes beyond just replacing max_tokens with max_completion_tokens.
o1 appears to use a different streaming format, returning sections of the response within nested callbacks.
I found additional problems using AI Genie with the o1 model. Simple queries worked, but longer queries came back with responses that AI Genie was unable to parse, so it displayed blank results. I suspect there is a change involving nested queries, where the client needs to individually query each section to get its data.
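As a sketch of the parameter change described above: o1-family models reject max_tokens with a 400 error, so the request payload has to carry max_completion_tokens instead. The helper name and the model-prefix check below are assumptions for illustration, not the extension's actual code.

```python
def build_completion_params(model: str, token_limit: int) -> dict:
    """Build the token-limit portion of a chat completion request.

    Hypothetical helper: o1 / o1-mini reject 'max_tokens'
    ("unsupported_parameter", see the error below), so we switch
    to 'max_completion_tokens' for those models.
    """
    params = {"model": model}
    if model.startswith("o1"):
        # o1-family models only accept 'max_completion_tokens'
        params["max_completion_tokens"] = token_limit
    else:
        # Older chat models still use 'max_tokens'
        params["max_tokens"] = token_limit
    return params
```

These params would then be merged into the body of the chat completions request; only the key name changes, not the meaning of the limit.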
Verify it's not a duplicate bug report
Describe the Bug
OpenAI error 400: {
  "error": {
    "message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.",
    "type": "invalid_request_error",
    "param": "max_tokens",
    "code": "unsupported_parameter"
  }
}
Please tell us if you have customized any of the extension settings or whether you are using the defaults.
model: o1-mini
Additional context
No response