Commit f622562

Merge branch 'main' into litellm_return-response_headers

ishaan-jaff authored Jul 21, 2024
2 parents 82764d2 + 36cb63c commit f622562
Showing 37 changed files with 570 additions and 89 deletions.
1 change: 1 addition & 0 deletions docs/my-website/docs/providers/anthropic.md
@@ -22,6 +22,7 @@ Anthropic API fails requests when `max_tokens` are not passed. Due to this litel
import os

os.environ["ANTHROPIC_API_KEY"] = "your-api-key"
# os.environ["ANTHROPIC_API_BASE"] = "" # [OPTIONAL] or 'ANTHROPIC_BASE_URL'
```

## Usage
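
A minimal sketch of basic usage with the key set above; the model name and prompt are illustrative, and `max_tokens` is passed explicitly since, as noted above, the Anthropic API rejects requests without it:

```python
import os
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

# The Anthropic API requires max_tokens; litellm fills in a default,
# but passing it explicitly keeps the request unambiguous.
response = completion(
    model="anthropic/claude-3-haiku-20240307",  # illustrative model name
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```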
104 changes: 104 additions & 0 deletions docs/my-website/docs/providers/mistral.md
@@ -1,3 +1,6 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Mistral AI API
https://docs.mistral.ai/api/

@@ -41,9 +44,106 @@ for chunk in response:
```



## Usage with LiteLLM Proxy

### 1. Set Mistral Models on config.yaml

```yaml
model_list:
  - model_name: mistral-small-latest
    litellm_params:
      model: mistral/mistral-small-latest
      api_key: "os.environ/MISTRAL_API_KEY" # ensure you have `MISTRAL_API_KEY` in your .env
```
### 2. Start Proxy

```shell
litellm --config config.yaml
```

### 3. Test it


<Tabs>
<TabItem value="Curl" label="Curl Request">

```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "mistral-small-latest",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'
```
</TabItem>
<TabItem value="openai" label="OpenAI v1.0.0+">

```python
import openai
client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000",
)

response = client.chat.completions.create(
    model="mistral-small-latest",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
)

print(response)

```
</TabItem>
<TabItem value="langchain" label="Langchain">

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(
    openai_api_base="http://0.0.0.0:4000",  # set openai_api_base to the LiteLLM Proxy
    model="mistral-small-latest",
    temperature=0.1,
)

messages = [
    SystemMessage(
        content="You are a helpful assistant that I'm using to make a test request to."
    ),
    HumanMessage(
        content="test from litellm. tell me why it's amazing in 1 sentence"
    ),
]
response = chat(messages)

print(response)
```
</TabItem>
</Tabs>
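
Since the proxy speaks the OpenAI protocol, streaming works through the same client. A minimal sketch, assuming the proxy from step 2 is running on port 4000:

```python
import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

# Stream tokens as they arrive; each chunk follows the OpenAI delta format.
stream = client.chat.completions.create(
    model="mistral-small-latest",
    messages=[{"role": "user", "content": "write a short poem"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```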

## Supported Models

:::info
All models listed at https://docs.mistral.ai/platform/endpoints are supported. We actively maintain the list of models, pricing, and token windows [here](https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json).

:::


| Model Name | Function Call |
|----------------|--------------------------------------------------------------|
| Mistral Small | `completion(model="mistral/mistral-small-latest", messages)` |
@@ -53,6 +153,10 @@
| Mixtral 8x7B | `completion(model="mistral/open-mixtral-8x7b", messages)` |
| Mixtral 8x22B | `completion(model="mistral/open-mixtral-8x22b", messages)` |
| Codestral | `completion(model="mistral/codestral-latest", messages)` |
| Mistral NeMo | `completion(model="mistral/open-mistral-nemo", messages)` |
| Mistral NeMo 2407 | `completion(model="mistral/open-mistral-nemo-2407", messages)` |
| Codestral Mamba | `completion(model="mistral/open-codestral-mamba", messages)` |
| Codestral Mamba | `completion(model="mistral/codestral-mamba-latest", messages)` |
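
For example, calling one of the newly added models directly through the SDK (the prompt here is illustrative):

```python
import os
from litellm import completion

os.environ["MISTRAL_API_KEY"] = "your-api-key"

response = completion(
    model="mistral/open-mistral-nemo",
    messages=[{"role": "user", "content": "Hello from litellm"}],
)
print(response.choices[0].message.content)
```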

## Function Calling
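
A rough sketch of the OpenAI-style `tools` format that litellm forwards to providers with function-calling support; the tool definition below is a hypothetical example, and the model choice is an assumption:

```python
from litellm import completion

# Hypothetical tool definition in the OpenAI tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"}
                },
                "required": ["location"],
            },
        },
    }
]

response = completion(
    model="mistral/mistral-large-latest",  # assumed to support function calling
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```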

