feature: AI pack for workflows, providers and examples. (#3091)
1 parent 013ae52, commit 4952cc7
Showing 34 changed files with 1,539 additions and 21 deletions.
---
title: "Anthropic Provider"
description: "The Anthropic Provider allows for integrating Anthropic's Claude language models into Keep."
---

<Tip>
The Anthropic Provider supports querying Claude language models for prompt-based
interactions.
</Tip>

## Inputs

The Anthropic Provider supports the following inputs:

- `prompt`: Interact with Claude models by sending prompts and receiving responses
- `model`: The model to be used, defaults to `claude-3-sonnet-20240229`
- `max_tokens`: Limits the number of tokens returned by the model, defaults to 1024.
- `structured_output_format`: Optional JSON format for the structured output (see the examples on GitHub).

## Outputs

Currently, the Anthropic Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

To use the Anthropic Provider, you'll need an API Key from Anthropic. The required parameter for authentication is:

- **api_key** (required): Your Anthropic API Key.

## Connecting with the Provider

To connect to Claude, you'll need to obtain an API Key:

1. Log in to your Anthropic account at the [Anthropic Console](https://console.anthropic.com).
2. Navigate to the **API Keys** section.
3. Click **Create Key** to generate a new API key for Keep.

Use the generated API key in the `authentication` section of your Anthropic Provider configuration.
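A minimal sketch of how such a provider is typically invoked from a Keep workflow step, modeled on the OpenAI example at the end of this commit; the provider alias `my_anthropic` and the `anthropic` type name are assumptions for illustration, not confirmed identifiers:

```yaml
steps:
  - name: ask-claude
    provider:
      config: "{{ providers.my_anthropic }}" # hypothetical provider alias
      type: anthropic                        # assumed provider type name
      with:
        prompt: "Summarize this alert in one sentence: {{ alert.name }}"
        model: "claude-3-sonnet-20240229"    # default model from the inputs above
        max_tokens: 1024
```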
---
title: "Gemini Provider"
description: "The Gemini Provider allows for integrating Google's Gemini language models into Keep."
---

<Tip>
The Gemini Provider supports querying Gemini language models for prompt-based
interactions.
</Tip>

## Inputs

The Gemini Provider supports the following inputs:

- `prompt`: Interact with Gemini models by sending prompts and receiving responses
- `model`: The model to be used, defaults to `gemini-pro`
- `max_tokens`: Limits the number of tokens returned by the model, defaults to 1024.
- `structured_output_format`: Optional JSON format for the structured output (see the examples on GitHub).

## Outputs

Currently, the Gemini Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

To use the Gemini Provider, you'll need an API Key from Google AI Studio. The required parameter for authentication is:

- **api_key** (required): Your Google AI API Key.

## Connecting with the Provider

To connect to Gemini, you'll need to obtain an API Key:

1. Go to [Google AI Studio](https://makersuite.google.com/app/apikey).
2. Click **Create API Key** or use an existing one.
3. Copy your API key for Keep.

Use the generated API key in the `authentication` section of your Gemini Provider configuration.
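As with the other AI providers in this pack, a hedged workflow-step sketch (the alias `my_gemini` and the `gemini` type name are assumptions); the model's reply is then available to later steps via `results.response`, following the pattern of the OpenAI workflow example below:

```yaml
steps:
  - name: ask-gemini
    provider:
      config: "{{ providers.my_gemini }}" # hypothetical provider alias
      type: gemini                        # assumed provider type name
      with:
        prompt: "Classify the severity of this alert: {{ alert.name }}"
        model: "gemini-pro"               # default model from the inputs above
        max_tokens: 512
```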
---
title: "Grok Provider"
description: "The Grok Provider allows for integrating X.AI's Grok language models into Keep."
---

<Tip>
The Grok Provider supports querying Grok language models for prompt-based
interactions.
</Tip>

## Inputs

The Grok Provider supports the following inputs:

- `prompt`: Interact with Grok models by sending prompts and receiving responses
- `model`: The model to be used, defaults to `grok-1`
- `max_tokens`: Limits the number of tokens returned by the model, defaults to 1024.
- `structured_output_format`: Optional JSON format for the structured output (see the examples on GitHub).

## Outputs

Currently, the Grok Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

To use the Grok Provider, you'll need an API Key from X.AI. The required parameter for authentication is:

- **api_key** (required): Your X.AI API Key.

## Connecting with the Provider

To connect to Grok, you'll need to obtain an API Key:

1. Subscribe to Grok on the X.AI platform.
2. Navigate to the API section in your X.AI account settings.
3. Generate a new API key for Keep.

Use the generated API key in the `authentication` section of your Grok Provider configuration.
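A minimal sketch of a Grok step, again with an assumed provider alias (`my_grok`) and type name (`grok`); the comment at the end shows how a later step or action would reference the output:

```yaml
steps:
  - name: ask-grok
    provider:
      config: "{{ providers.my_grok }}" # hypothetical provider alias
      type: grok                        # assumed provider type name
      with:
        prompt: "Suggest a one-line remediation for: {{ alert.name }}"
        model: "grok-1"                 # default model from the inputs above
        max_tokens: 256

# The reply would then be referenced from later steps or actions as
# {{ steps.ask-grok.results.response }}
```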
---
title: "Llama.cpp Provider"
description: "The Llama.cpp Provider allows for integrating locally running Llama.cpp models into Keep."
---

<Tip>
The Llama.cpp Provider supports querying local Llama.cpp models for prompt-based
interactions. Make sure you have the Llama.cpp server running locally with your desired model.
</Tip>

### **Cloud Limitation**
This provider is disabled for cloud environments and can only be used in local or self-hosted environments.

## Inputs

The Llama.cpp Provider supports the following inputs:

- `prompt`: Interact with Llama.cpp models by sending prompts and receiving responses
- `max_tokens`: Limits the number of tokens returned by the model, defaults to 1024

## Outputs

Currently, the Llama.cpp Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

The Llama.cpp Provider requires the following configuration parameter:

- **host** (required): The Llama.cpp server host URL, defaults to "http://localhost:8080"

## Connecting with the Provider

To use the Llama.cpp Provider:

1. Install Llama.cpp on your system.
2. Download or convert your model to GGUF format.
3. Start the Llama.cpp server with the HTTP interface:
   ```bash
   ./server --model /path/to/your/model.gguf --host 0.0.0.0 --port 8080
   ```
4. Configure the host URL and model path in your Keep configuration (a quick connectivity check is shown below).
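Before wiring the server into Keep, it can help to confirm it answers over HTTP. A quick check against the upstream llama.cpp server's completion endpoint, assuming the default host and port above (this is the llama.cpp HTTP API, not anything Keep-specific):

```bash
# Send a tiny completion request to the locally running llama.cpp server
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Say hello", "n_predict": 16}'
```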
## Prerequisites

- Llama.cpp must be installed and compiled with server support
- A GGUF format model file must be available on your system
- The Llama.cpp server must be running and accessible
- The server must have sufficient resources to load and run your model

## Model Compatibility

The provider works with any GGUF format model compatible with Llama.cpp, including:

- LLaMA and LLaMA-2 models
- Mistral models
- OpenLLaMA models
- Vicuna models
- And other compatible model architectures

Make sure your model is in GGUF format before using it with the provider.
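A minimal workflow-step sketch for this provider; the alias `my_llamacpp` and the `llamacpp` type name are assumptions for illustration. Note that, per the inputs above, no `model` is passed in the step because the model is selected when the server is started:

```yaml
steps:
  - name: ask-local-model
    provider:
      config: "{{ providers.my_llamacpp }}" # hypothetical provider alias
      type: llamacpp                        # assumed provider type name
      with:
        # Only the prompt and token limit are passed; the model is loaded
        # by the llama.cpp server at startup.
        prompt: "Summarize this alert: {{ alert.name }}"
        max_tokens: 256
```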
---
title: "Ollama Provider"
description: "The Ollama Provider allows for integrating locally running Ollama language models into Keep."
---

<Tip>
The Ollama Provider supports querying local Ollama models for prompt-based
interactions. Make sure you have Ollama installed and running locally with your desired models.
</Tip>

### **Cloud Limitation**
This provider is disabled for cloud environments and can only be used in local or self-hosted environments.

## Inputs

The Ollama Provider supports the following inputs:

- `prompt`: Interact with Ollama models by sending prompts and receiving responses
- `model`: The model to be used, defaults to `llama2` (must be pulled in Ollama first)
- `max_tokens`: Limits the number of tokens returned by the model, defaults to 1024.
- `structured_output_format`: Optional JSON format for the structured output (see the examples on GitHub).

## Outputs

Currently, the Ollama Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

The Ollama Provider requires the following configuration parameter:

- **host** (required): The Ollama API host URL, defaults to "http://localhost:11434"

## Connecting with the Provider

To use the Ollama Provider:

1. Install Ollama on your system from [Ollama's website](https://ollama.ai).
2. Start the Ollama service.
3. Pull your desired model(s) using `ollama pull model-name`.
4. Configure the host URL in your Keep configuration (see the quick check below to verify the API is reachable).
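A short command-line sketch of steps 3 and 4, using `llama2` (the provider's default) as an example model; the curl call uses Ollama's standard `/api/generate` endpoint to confirm the service is reachable from the host where Keep runs:

```bash
# Pull the model Keep will use (llama2 is the provider's default)
ollama pull llama2

# Verify the Ollama API responds on the default host and port
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Say hello", "stream": false}'
```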
## Prerequisites

- Ollama must be installed and running on your system.
- The desired models must be pulled and available in your Ollama installation.
- The Ollama API must be accessible from the host where Keep is running.
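A minimal workflow-step sketch for the Ollama provider; the alias `my_ollama` and the `ollama` type name are assumptions for illustration:

```yaml
steps:
  - name: ask-ollama
    provider:
      config: "{{ providers.my_ollama }}" # hypothetical provider alias
      type: ollama                        # assumed provider type name
      with:
        prompt: "Classify this alert as noise or actionable: {{ alert.name }}"
        model: "llama2"                   # must already be pulled locally
        max_tokens: 512
```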
The pack also ships an example workflow that consults OpenAI (using structured output) before cleaning a heavy MySQL table:

```yaml
id: auto-fix-mysql-table-overflow
description: Clean heavy MySQL tables after consulting with OpenAI using structured output
triggers:
  - type: manual

steps:
  - name: ask-openai-if-this-workflow-is-applicable
    provider:
      config: "{{ providers.my_openai }}"
      type: openai
      with:
        prompt: "There is a task cleaning MySQL database. Should we run the task if we received an alert with such a name {{ alert.name }}?"
        model: "gpt-4o-mini" # This model supports structured output
        structured_output_format: # Constrain what the model can return
          type: json_schema
          json_schema:
            name: workflow_applicability
            schema:
              type: object
              properties:
                should_run:
                  type: boolean
                  description: "Whether the workflow should be executed based on the alert"
              required: ["should_run"]
              additionalProperties: false
            strict: true

actions:
  - name: clean-db-step
    if: "{{ steps.ask-openai-if-this-workflow-is-applicable.results.response.should_run }}"
    provider:
      config: "{{ providers.mysql }}"
      type: mysql
      with:
        query: DELETE FROM bookstore.cache ORDER BY id DESC LIMIT 100;
```