Added new docs for litellm hosted models
tyfiero committed Dec 31, 2023
1 parent 487b52c commit d9de992
Showing 12 changed files with 623 additions and 0 deletions.
60 changes: 60 additions & 0 deletions docs/language-model-setup/hosted-models/anyscale.mdx
@@ -0,0 +1,60 @@
---
title: Anyscale
---

To use Open Interpreter with a model from Anyscale, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model anyscale/<model-name>
```

```python Python
from interpreter import interpreter

# Set the model to use from Anyscale:
interpreter.llm.model = "anyscale/<model-name>"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support the following completion models from Anyscale:

- Llama 2 7B Chat
- Llama 2 13B Chat
- Llama 2 70B Chat
- Mistral 7B Instruct
- CodeLlama 34B Instruct

<CodeGroup>

```bash Terminal
interpreter --model anyscale/meta-llama/Llama-2-7b-chat-hf
interpreter --model anyscale/meta-llama/Llama-2-13b-chat-hf
interpreter --model anyscale/meta-llama/Llama-2-70b-chat-hf
interpreter --model anyscale/mistralai/Mistral-7B-Instruct-v0.1
interpreter --model anyscale/codellama/CodeLlama-34b-Instruct-hf
```

```python Python
interpreter.llm.model = "anyscale/meta-llama/Llama-2-7b-chat-hf"
interpreter.llm.model = "anyscale/meta-llama/Llama-2-13b-chat-hf"
interpreter.llm.model = "anyscale/meta-llama/Llama-2-70b-chat-hf"
interpreter.llm.model = "anyscale/mistralai/Mistral-7B-Instruct-v0.1"
interpreter.llm.model = "anyscale/codellama/CodeLlama-34b-Instruct-hf"

```

</CodeGroup>

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| -------------------- | -------------------------------------- | --------------------------------------------------------------------------- |
| `ANYSCALE_API_KEY` | The API key for your Anyscale account. | [Anyscale Account Settings](https://app.endpoints.anyscale.com/credentials) |
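
For example, you can export the key in your shell or set it from Python before starting a chat (the key value below is a placeholder):

<CodeGroup>

```bash Terminal
export ANYSCALE_API_KEY="your-api-key"
```

```python Python
import os

# Placeholder value; replace with your real Anyscale API key.
os.environ["ANYSCALE_API_KEY"] = "your-api-key"
```

</CodeGroup>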
70 changes: 70 additions & 0 deletions docs/language-model-setup/hosted-models/aws-sagemaker.mdx
@@ -0,0 +1,70 @@
---
title: AWS SageMaker
---

To use Open Interpreter with a model from AWS SageMaker, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model sagemaker/<model-name>
```

```python Python
# SageMaker requires boto3 to be installed on your machine:
# pip install boto3

from interpreter import interpreter

interpreter.llm.model = "sagemaker/<model-name>"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support the following completion models from AWS SageMaker:

- Meta Llama 2 7B
- Meta Llama 2 7B (Chat/Fine-tuned)
- Meta Llama 2 13B
- Meta Llama 2 13B (Chat/Fine-tuned)
- Meta Llama 2 70B
- Meta Llama 2 70B (Chat/Fine-tuned)
- Your Custom Huggingface Model

<CodeGroup>

```bash Terminal
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b
interpreter --model sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-b-f
interpreter --model sagemaker/<your-huggingface-deployment-name>
```

```python Python
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-7b-f"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-13b-f"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b"
interpreter.llm.model = "sagemaker/jumpstart-dft-meta-textgeneration-llama-2-70b-b-f"
interpreter.llm.model = "sagemaker/<your-hugginface-deployment-name>"
```

</CodeGroup>

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| ----------------------- | ----------------------------------------------- | ----------------------------------------------------------------------------------- |
| `AWS_ACCESS_KEY_ID`     | The API access key for your AWS account.         | [AWS Account Overview -> Security Credentials](https://console.aws.amazon.com/)      |
| `AWS_SECRET_ACCESS_KEY` | The API secret access key for your AWS account.  | [AWS Account Overview -> Security Credentials](https://console.aws.amazon.com/)      |
| `AWS_REGION_NAME`       | The AWS region you want to use.                  | [AWS Account Overview -> Navigation bar -> Region](https://console.aws.amazon.com/)  |
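
For example, assuming placeholder credentials and an example region, you can export these in your shell before launching Open Interpreter:

```bash Terminal
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION_NAME="us-west-2"  # example region; use your own
```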
57 changes: 57 additions & 0 deletions docs/language-model-setup/hosted-models/baseten.mdx
@@ -0,0 +1,57 @@
---
title: Baseten
---

To use Open Interpreter with Baseten, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model baseten/<baseten-model>
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "baseten/<baseten-model>"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support the following completion models from Baseten:

- Falcon 7b (qvv0xeq)
- Wizard LM (q841o8w)
- MPT 7b Base (31dxrj3)

<CodeGroup>

```bash Terminal
interpreter --model baseten/qvv0xeq
interpreter --model baseten/q841o8w
interpreter --model baseten/31dxrj3
```

```python Python
interpreter.llm.model = "baseten/qvv0xeq"
interpreter.llm.model = "baseten/q841o8w"
interpreter.llm.model = "baseten/31dxrj3"


```

</CodeGroup>

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| -------------------- | --------------- | -------------------------------------------------------------------------------------------------------- |
| `BASETEN_API_KEY`    | Baseten API key | [Baseten Dashboard -> Settings -> Account -> API Keys](https://app.baseten.co/settings/account/api_keys) |
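
For example, with a placeholder key:

```bash Terminal
export BASETEN_API_KEY="your-api-key"
```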
59 changes: 59 additions & 0 deletions docs/language-model-setup/hosted-models/cloudflare.mdx
@@ -0,0 +1,59 @@
---
title: Cloudflare Workers AI
---

To use Open Interpreter with the Cloudflare Workers AI API, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model cloudflare/<cloudflare-model>
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "cloudflare/<cloudflare-model>"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support the following completion models from Cloudflare Workers AI:

- Llama-2 7b chat fp16
- Llama-2 7b chat int8
- Mistral 7b instruct v0.1
- CodeLlama 7b instruct awq

<CodeGroup>

```bash Terminal
interpreter --model cloudflare/@cf/meta/llama-2-7b-chat-fp16
interpreter --model cloudflare/@cf/meta/llama-2-7b-chat-int8
interpreter --model cloudflare/@cf/mistral/mistral-7b-instruct-v0.1
interpreter --model cloudflare/@hf/thebloke/codellama-7b-instruct-awq
```

```python Python
interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-fp16"
interpreter.llm.model = "cloudflare/@cf/meta/llama-2-7b-chat-int8"
interpreter.llm.model = "@cf/mistral/mistral-7b-instruct-v0.1"
interpreter.llm.model = "@hf/thebloke/codellama-7b-instruct-awq"

```

</CodeGroup>

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| ----------------------- | -------------------------- | ---------------------------------------------------------------------------------------------- |
| `CLOUDFLARE_API_KEY`    | Cloudflare API key         | [Cloudflare Profile Page -> API Tokens](https://dash.cloudflare.com/profile/api-tokens) |
| `CLOUDFLARE_ACCOUNT_ID` | Your Cloudflare account ID | [Cloudflare Dashboard -> Overview page -> API section](https://dash.cloudflare.com/)    |
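
For example, with placeholder values:

```bash Terminal
export CLOUDFLARE_API_KEY="your-api-key"
export CLOUDFLARE_ACCOUNT_ID="your-account-id"
```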
64 changes: 64 additions & 0 deletions docs/language-model-setup/hosted-models/deepinfra.mdx
@@ -0,0 +1,64 @@
---
title: DeepInfra
---

To use Open Interpreter with DeepInfra, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model deepinfra/<deepinfra-model>
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "deepinfra/<deepinfra-model>"
interpreter.chat()
```

</CodeGroup>

# Supported Models

We support the following completion models from DeepInfra:

- Llama-2 70b chat hf
- Llama-2 7b chat hf
- Llama-2 13b chat hf
- CodeLlama 34b instruct awq
- Mistral 7b instruct v0.1
- Airoboros L2 70b gpt4 1.4.1

<CodeGroup>

```bash Terminal
interpreter --model deepinfra/meta-llama/Llama-2-70b-chat-hf
interpreter --model deepinfra/meta-llama/Llama-2-7b-chat-hf
interpreter --model deepinfra/meta-llama/Llama-2-13b-chat-hf
interpreter --model deepinfra/codellama/CodeLlama-34b-Instruct-hf
interpreter --model deepinfra/mistralai/Mistral-7B-Instruct-v0.1
interpreter --model deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1
```

```python Python
interpreter.llm.model = "deepinfra/meta-llama/Llama-2-70b-chat-hf"
interpreter.llm.model = "deepinfra/meta-llama/Llama-2-7b-chat-hf"
interpreter.llm.model = "deepinfra/meta-llama/Llama-2-13b-chat-hf"
interpreter.llm.model = "deepinfra/codellama/CodeLlama-34b-Instruct-hf"
interpreter.llm.model = "deepinfra/mistral-7b-instruct-v0.1"
interpreter.llm.model = "deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1"

```

</CodeGroup>

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| -------------------- | ----------------- | ---------------------------------------------------------------------- |
| `DEEPINFRA_API_KEY`  | DeepInfra API key | [DeepInfra Dashboard -> API Keys](https://deepinfra.com/dash/api_keys)  |
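
For example, exporting a placeholder key in your shell:

```bash Terminal
export DEEPINFRA_API_KEY="your-api-key"
```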
48 changes: 48 additions & 0 deletions docs/language-model-setup/hosted-models/huggingface.mdx
@@ -0,0 +1,48 @@
---
title: Huggingface
---

To use Open Interpreter with Huggingface models, set the `model` flag:

<CodeGroup>

```bash Terminal
interpreter --model huggingface/<huggingface-model>
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "huggingface/<huggingface-model>"
interpreter.chat()
```

</CodeGroup>

You may also need to specify your Huggingface API base URL:

<CodeGroup>

```bash Terminal
interpreter --api_base https://my-endpoint.huggingface.cloud
```

```python Python
from interpreter import interpreter

interpreter.llm.api_base = "https://my-endpoint.huggingface.cloud"
interpreter.chat()
```

</CodeGroup>

# Supported Models

Open Interpreter should work with almost any text-based Huggingface model.
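
For example, a hosted endpoint serving Mistral 7B Instruct (the model choice here is purely illustrative):

```bash Terminal
interpreter --model huggingface/mistralai/Mistral-7B-Instruct-v0.1
```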

# Required Environment Variables

Set the following environment variables [(click here to learn how)](https://chat.openai.com/share/1062cdd8-62a1-4aa8-8ec9-eca45645971a) to use these models.

| Environment Variable | Description | Where to Find |
| ---------------------- | --------------------------- | ---------------------------------------------------------------------------------- |
| `HUGGINGFACE_API_KEY`  | Huggingface account API key | [Huggingface -> Settings -> Access Tokens](https://huggingface.co/settings/tokens) |
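
For example, with a placeholder token:

```bash Terminal
export HUGGINGFACE_API_KEY="your-api-token"
```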