feature: AI pack for workflows, providers and examples. (#3091)
Matvey-Kuk authored Jan 23, 2025
1 parent 013ae52 commit 4952cc7
Showing 34 changed files with 1,539 additions and 21 deletions.
5 changes: 5 additions & 0 deletions docs/mint.json
@@ -113,6 +113,7 @@
"pages": [
"providers/documentation/aks-provider",
"providers/documentation/amazonsqs-provider",
"providers/documentation/anthropic-provider",
"providers/documentation/appdynamics-provider",
"providers/documentation/s3-provider",
"providers/documentation/argocd-provider",
@@ -133,6 +134,7 @@
"providers/documentation/dynatrace-provider",
"providers/documentation/elastic-provider",
"providers/documentation/gcpmonitoring-provider",
"providers/documentation/gemini-provider",
"providers/documentation/github-provider",
"providers/documentation/github_workflows_provider",
"providers/documentation/gitlab-provider",
@@ -143,6 +145,7 @@
"providers/documentation/grafana_incident-provider",
"providers/documentation/grafana_oncall-provider",
"providers/documentation/graylog-provider",
"providers/documentation/grok-provider",
"providers/documentation/http-provider",
"providers/documentation/ilert-provider",
"providers/documentation/incidentio-provider",
@@ -155,6 +158,7 @@
"providers/documentation/kubernetes-provider",
"providers/documentation/linear_provider",
"providers/documentation/linearb-provider",
"providers/documentation/llamacpp-provider",
"providers/documentation/mailchimp-provider",
"providers/documentation/mailgun-provider",
"providers/documentation/mattermost-provider",
@@ -166,6 +170,7 @@
"providers/documentation/netdata-provider",
"providers/documentation/new-relic-provider",
"providers/documentation/ntfy-provider",
"providers/documentation/ollama-provider",
"providers/documentation/openai-provider",
"providers/documentation/openobserve-provider",
"providers/documentation/openshift-provider",
38 changes: 38 additions & 0 deletions docs/providers/documentation/anthropic-provider.mdx
@@ -0,0 +1,38 @@
---
title: "Anthropic Provider"
description: "The Anthropic Provider allows for integrating Anthropic's Claude language models into Keep."
---

<Tip>
The Anthropic Provider supports querying Claude language models for prompt-based
interactions.
</Tip>

## Inputs

The Anthropic Provider supports the following inputs:

- `prompt`: Interact with Claude models by sending prompts and receiving responses.
- `model`: The model to be used; defaults to `claude-3-sonnet-20240229`.
- `max_tokens`: Limits the number of tokens returned by the model; defaults to 1024.
- `structured_output_format`: Optional JSON schema for structured output (see the examples on GitHub).

## Outputs

Currently, the Anthropic Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

To use the Anthropic Provider, you'll need an API Key from Anthropic. The required parameter for authentication is:

- **api_key** (required): Your Anthropic API Key.

## Connecting with the Provider

To connect to Claude, you'll need to obtain an API Key:

1. Log in to your Anthropic account at [Anthropic Console](https://console.anthropic.com).
2. Navigate to the **API Keys** section.
3. Click on **Create Key** to generate a new API key for Keep.

Use the generated API key in the `authentication` section of your Anthropic Provider configuration.
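
For illustration, a workflow step using this provider could look like the sketch below. The alias `my_anthropic` is a hypothetical name, the `anthropic` type is inferred from this page's slug, and the step layout mirrors the OpenAI example workflow shipped in this commit.

```yaml
steps:
  - name: summarize-alert
    provider:
      config: "{{ providers.my_anthropic }}" # hypothetical provider alias
      type: anthropic # assumed type, inferred from this page's slug
      with:
        prompt: "Summarize this alert in one sentence: {{ alert.name }}"
        model: "claude-3-sonnet-20240229" # default model from this page
        max_tokens: 1024
```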
38 changes: 38 additions & 0 deletions docs/providers/documentation/gemini-provider.mdx
@@ -0,0 +1,38 @@
---
title: "Gemini Provider"
description: "The Gemini Provider allows for integrating Google's Gemini language models into Keep."
---

<Tip>
The Gemini Provider supports querying Gemini language models for prompt-based
interactions.
</Tip>

## Inputs

The Gemini Provider supports the following inputs:

- `prompt`: Interact with Gemini models by sending prompts and receiving responses.
- `model`: The model to be used; defaults to `gemini-pro`.
- `max_tokens`: Limits the number of tokens returned by the model; defaults to 1024.
- `structured_output_format`: Optional JSON schema for structured output (see the examples on GitHub).

## Outputs

Currently, the Gemini Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

To use the Gemini Provider, you'll need an API Key from Google AI Studio. The required parameter for authentication is:

- **api_key** (required): Your Google AI API Key.

## Connecting with the Provider

To connect to Gemini, you'll need to obtain an API Key:

1. Go to [Google AI Studio](https://makersuite.google.com/app/apikey).
2. Click on **Create API Key** or use an existing one.
3. Copy your API key for Keep.

Use the generated API key in the `authentication` section of your Gemini Provider configuration.
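
Under the same caveats, a minimal Gemini step might look like this sketch (the alias `my_gemini` is hypothetical; the `gemini` type is inferred from this page's slug):

```yaml
steps:
  - name: classify-alert
    provider:
      config: "{{ providers.my_gemini }}" # hypothetical provider alias
      type: gemini # assumed type, inferred from this page's slug
      with:
        prompt: "In one word, is this alert actionable? {{ alert.name }}"
        model: "gemini-pro" # default model from this page
        max_tokens: 256
```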
38 changes: 38 additions & 0 deletions docs/providers/documentation/grok-provider.mdx
@@ -0,0 +1,38 @@
---
title: "Grok Provider"
description: "The Grok Provider allows for integrating X.AI's Grok language models into Keep."
---

<Tip>
The Grok Provider supports querying Grok language models for prompt-based
interactions.
</Tip>

## Inputs

The Grok Provider supports the following inputs:

- `prompt`: Interact with Grok models by sending prompts and receiving responses.
- `model`: The model to be used; defaults to `grok-1`.
- `max_tokens`: Limits the number of tokens returned by the model; defaults to 1024.
- `structured_output_format`: Optional JSON schema for structured output (see the examples on GitHub).

## Outputs

Currently, the Grok Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

To use the Grok Provider, you'll need an API Key from X.AI. The required parameter for authentication is:

- **api_key** (required): Your X.AI API Key.

## Connecting with the Provider

To connect to Grok, you'll need to obtain an API Key:

1. Subscribe to Grok on the X.AI platform.
2. Navigate to the API section in your X.AI account settings.
3. Generate a new API key for Keep.

Use the generated API key in the `authentication` section of your Grok Provider configuration.
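
A hedged sketch of a Grok step, under the same assumptions (hypothetical alias `my_grok`; `grok` type inferred from this page's slug):

```yaml
steps:
  - name: triage-with-grok
    provider:
      config: "{{ providers.my_grok }}" # hypothetical provider alias
      type: grok # assumed type, inferred from this page's slug
      with:
        prompt: "Suggest a likely root cause for this alert: {{ alert.name }}"
        model: "grok-1" # default model from this page
        max_tokens: 1024
```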
59 changes: 59 additions & 0 deletions docs/providers/documentation/llamacpp-provider.mdx
@@ -0,0 +1,59 @@
---
title: "Llama.cpp Provider"
description: "The Llama.cpp Provider allows for integrating locally running Llama.cpp models into Keep."
---

<Tip>
The Llama.cpp Provider supports querying local Llama.cpp models for prompt-based
interactions. Make sure you have a Llama.cpp server running locally with your desired model.
</Tip>

### **Cloud Limitation**
This provider is disabled for cloud environments and can only be used in local or self-hosted environments.

## Inputs

The Llama.cpp Provider supports the following inputs:

- `prompt`: Interact with Llama.cpp models by sending prompts and receiving responses.
- `max_tokens`: Limits the number of tokens returned by the model; defaults to 1024.

## Outputs

Currently, the Llama.cpp Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

The Llama.cpp Provider requires the following configuration parameter:

- **host** (required): The Llama.cpp server host URL; defaults to `http://localhost:8080`.

## Connecting with the Provider

To use the Llama.cpp Provider:

1. Install Llama.cpp on your system
2. Download or convert your model to GGUF format
3. Start the Llama.cpp server with HTTP interface:
```bash
./server --model /path/to/your/model.gguf --host 0.0.0.0 --port 8080
```
4. Configure the host URL and model path in your Keep configuration

## Prerequisites

- Llama.cpp must be installed and compiled with server support
- A GGUF format model file must be available on your system
- The Llama.cpp server must be running and accessible
- The server must have sufficient resources to load and run your model

## Model Compatibility

The provider works with any GGUF format model compatible with Llama.cpp, including:
- LLaMA and LLaMA-2 models
- Mistral models
- OpenLLaMA models
- Vicuna models
- And other compatible model architectures

Make sure your model is in GGUF format before using it with the provider.
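
For illustration, a step against the local server could look like the sketch below; the alias `my_llamacpp` is hypothetical (its `host` should point at the server started above), and the `llamacpp` type is inferred from this page's slug.

```yaml
steps:
  - name: ask-local-model
    provider:
      config: "{{ providers.my_llamacpp }}" # hypothetical alias; host set as above
      type: llamacpp # assumed type, inferred from this page's slug
      with:
        prompt: "Summarize this alert: {{ alert.name }}"
        max_tokens: 1024
```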
46 changes: 46 additions & 0 deletions docs/providers/documentation/ollama-provider.mdx
@@ -0,0 +1,46 @@
---
title: "Ollama Provider"
description: "The Ollama Provider allows for integrating locally running Ollama language models into Keep."
---

<Tip>
The Ollama Provider supports querying local Ollama models for prompt-based
interactions. Make sure you have Ollama installed and running locally with your desired models.
</Tip>

### **Cloud Limitation**
This provider is disabled for cloud environments and can only be used in local or self-hosted environments.

## Inputs

The Ollama Provider supports the following inputs:

- `prompt`: Interact with Ollama models by sending prompts and receiving responses.
- `model`: The model to be used; defaults to `llama2` (the model must be pulled in Ollama first).
- `max_tokens`: Limits the number of tokens returned by the model; defaults to 1024.
- `structured_output_format`: Optional JSON schema for structured output (see the examples on GitHub).

## Outputs

Currently, the Ollama Provider outputs the response from the model based on the prompt provided.

## Authentication Parameters

The Ollama Provider requires the following configuration parameter:

- **host** (required): The Ollama API host URL; defaults to `http://localhost:11434`.

## Connecting with the Provider

To use the Ollama Provider:

1. Install Ollama on your system from [Ollama's website](https://ollama.ai).
2. Start the Ollama service.
3. Pull your desired model(s) using `ollama pull model-name`.
4. Configure the host URL in your Keep configuration.

## Prerequisites

- Ollama must be installed and running on your system.
- The desired models must be pulled and available in your Ollama installation.
- The Ollama API must be accessible from the host where Keep is running.
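
A minimal step sketch under the same assumptions (hypothetical alias `my_ollama`; `ollama` type inferred from this page's slug; the model must be pulled first):

```yaml
steps:
  - name: ask-ollama
    provider:
      config: "{{ providers.my_ollama }}" # hypothetical provider alias
      type: ollama # assumed type, inferred from this page's slug
      with:
        prompt: "Summarize this alert: {{ alert.name }}"
        model: "llama2" # default; pull it first with `ollama pull llama2`
        max_tokens: 1024
```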
4 changes: 3 additions & 1 deletion docs/providers/documentation/openai-provider.mdx
@@ -12,8 +12,10 @@ description: "The OpenAI Provider allows for integrating OpenAI's language model

The OpenAI Provider supports the following functions:

- `query`: Interact with OpenAI's models by sending prompts and receiving responses
- `prompt`: Interact with OpenAI's models by sending prompts and receiving responses
- `model`: The model to be used, defaults to `gpt-3.5-turbo`
- `max_tokens`: Limits the number of tokens returned by the model; defaults to 1024.
- `structured_output_format`: Optional JSON schema for structured output (see the examples on GitHub).

## Outputs

32 changes: 31 additions & 1 deletion docs/providers/overview.mdx
@@ -24,6 +24,12 @@ By leveraging Keep Providers, users are able to deeply integrate Keep with the t
icon={ <img src="https://img.logo.dev/amazonsqs.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="Anthropic"
href="/providers/documentation/anthropic-provider"
icon={ <img src="https://img.logo.dev/anthropic.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="AppDynamics"
href="/providers/documentation/appdynamics-provider"
@@ -138,6 +144,12 @@ By leveraging Keep Providers, users are able to deeply integrate Keep with the t
icon={ <img src="https://img.logo.dev/googlecloudpresscorner.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="Gemini"
href="/providers/documentation/gemini-provider"
icon={ <img src="https://img.logo.dev/gemini.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="GitHub"
href="/providers/documentation/github-provider"
@@ -198,6 +210,12 @@ By leveraging Keep Providers, users are able to deeply integrate Keep with the t
icon={ <img src="https://img.logo.dev/graylog.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="Grok"
href="/providers/documentation/grok-provider"
icon={ <img src="https://img.logo.dev/grok.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="HTTP"
href="/providers/documentation/http-provider"
@@ -270,6 +288,12 @@ By leveraging Keep Providers, users are able to deeply integrate Keep with the t
icon={ <img src="https://img.logo.dev/linearb.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="Llama.cpp"
href="/providers/documentation/llamacpp-provider"
icon={ <img src="https://img.logo.dev/llama.cpp.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="Mailchimp"
href="/providers/documentation/mailchimp-provider"
@@ -330,6 +354,12 @@ By leveraging Keep Providers, users are able to deeply integrate Keep with the t
icon={ <img src="https://img.logo.dev/ntfy.sh.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="Ollama"
href="/providers/documentation/ollama-provider"
icon={ <img src="https://img.logo.dev/ollama.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>

<Card
title="OpenAI"
href="/providers/documentation/openai-provider"
@@ -581,4 +611,4 @@ By leveraging Keep Providers, users are able to deeply integrate Keep with the t
href="/providers/documentation/zenduty-provider"
icon={ <img src="https://img.logo.dev/zenduty.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" /> }
></Card>
</CardGroup>
35 changes: 35 additions & 0 deletions examples/workflows/conditionally_run_if_ai_says_so.yaml
@@ -0,0 +1,35 @@
id: auto-fix-mysql-table-overflow
description: Clean heavy MySQL tables after consulting OpenAI using structured output
triggers:
- type: manual

steps:
- name: ask-openai-if-this-workflow-is-applicable
provider:
config: "{{ providers.my_openai }}"
type: openai
with:
prompt: "There is a task cleaning MySQL database. Should we run the task if we received an alert with such a name {{ alert.name }}?"
model: "gpt-4o-mini" # This model supports structured output
structured_output_format: # We limit what model could return
type: json_schema
json_schema:
name: workflow_applicability
schema:
type: object
properties:
should_run:
type: boolean
description: "Whether the workflow should be executed based on the alert"
required: ["should_run"]
additionalProperties: false
strict: true

actions:
- name: clean-db-step
if: "{{ steps.ask-openai-if-this-workflow-is-applicable.results.response.should_run }}"
provider:
config: "{{ providers.mysql }}"
type: mysql
with:
query: DELETE FROM bookstore.cache ORDER BY id DESC LIMIT 100;