[9.0] [Obs Ai Assistant] Add system message (#209773) #211397
Conversation
Fix: System Message Missing in Inference Plugin

Closes elastic#209548

## Summary

A regression was introduced in 8.18 ([elastic#199286](elastic#199286)) where the system message is no longer passed to the inference plugin and, consequently, to the LLM. Currently, only user messages are being sent, which impacts conversation guidance and guardrails. The system message is crucial for steering responses and maintaining contextual integrity.

The filtering of the system message happens here:

https://github.com/elastic/kibana/blob/771a080ffa99e501e72c9cb98c833795769483ae/x-pack/platform/plugins/shared/observability_ai_assistant/server/service/client/index.ts#L510-L512

## Fix Approach

- Ensure the `system` message is included as a parameter in `inferenceClient.chatComplete`:

```typescript
const options = {
  connectorId,
  system,
  messages: convertMessagesForInference(messages),
  toolChoice,
  tools,
  functionCalling: (simulateFunctionCalling ? 'simulated' : 'native') as FunctionCallingMode,
};

if (stream) {
  return defer(() =>
    this.dependencies.inferenceClient.chatComplete({
      ...options,
      stream: true,
    })
  ).pipe(
    convertInferenceEventsToStreamingEvents(),
    instrumentAndCountTokens(name),
    failOnNonExistingFunctionCall({ functions }),
    tap((event) => {
      if (
        event.type === StreamingChatResponseEventType.ChatCompletionChunk &&
        this.dependencies.logger.isLevelEnabled('trace')
      ) {
        this.dependencies.logger.trace(`Received chunk: ${JSON.stringify(event.message)}`);
      }
    }),
    shareReplay()
  ) as TStream extends true
    ? Observable<ChatCompletionChunkEvent | TokenCountEvent | ChatCompletionMessageEvent>
    : never;
} else {
  return this.dependencies.inferenceClient.chatComplete({
    ...options,
    stream: false,
  }) as TStream extends true ? never : Promise<ChatCompleteResponse>;
}
```

- Add an API test to verify that the system message is correctly passed to the LLM (a sketch follows below).

(cherry picked from commit 0ae28aa)
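For reference, the sketch below outlines what such an API test might look like. It is a minimal sketch only, assuming Jest-style globals and two hypothetical helpers: `createLlmProxy`, which stands in for the LLM connector and records the request bodies it receives, and `callChatApi`, which calls the assistant's chat endpoint. The actual test added in the PR uses Kibana's own test scaffolding, so all names here are illustrative.

```typescript
// A minimal sketch, not the actual Kibana test. `createLlmProxy` and
// `callChatApi` are hypothetical helpers assumed for illustration.
import { createLlmProxy, callChatApi, LlmProxy } from './helpers';

describe('observability AI assistant chat', () => {
  let proxy: LlmProxy;

  beforeAll(async () => {
    // Start a mock LLM endpoint that captures every request body it receives.
    proxy = await createLlmProxy();
  });

  afterAll(async () => {
    await proxy.close();
  });

  it('passes the system message through to the LLM', async () => {
    // Queue a canned completion so the chat request can settle.
    proxy.respondWith('Hello from the mock LLM');

    await callChatApi({
      connectorId: proxy.connectorId,
      messages: [{ role: 'user', content: 'Hi' }],
    });

    // The regression dropped the system message, so the first message the
    // LLM saw was the user message. After the fix, the request body should
    // lead with a non-empty system message.
    const body = proxy.receivedRequests[0];
    expect(body.messages[0].role).toBe('system');
    expect(body.messages[0].content.length).toBeGreaterThan(0);
  });
});
```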
Pinging @elastic/obs-ai-assistant (Team:Obs AI Assistant)
💛 Build succeeded, but was flaky
Backport

This will backport the following commits from main to 9.0:

- [Obs Ai Assistant] Add system message (#209773)

Questions? Please refer to the Backport tool documentation.