
add getting started section to docs
krrishdholakia committed Sep 10, 2023
1 parent c12ceb6 commit 670f51e
Showing 3 changed files with 101 additions and 7 deletions.
69 changes: 69 additions & 0 deletions docs/my-website/docs/getting_started.md
@@ -0,0 +1,69 @@
# Getting Started

LiteLLM simplifies LLM API calls by mapping them all to the [OpenAI ChatCompletion format](https://platform.openai.com/docs/api-reference/chat).

## basic usage

```python
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call (model and messages can also be passed positionally)
response = completion("command-nightly", messages)
```
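Since every provider's response is mapped to the OpenAI ChatCompletion format, the reply text is read the same way no matter which model answered. A minimal sketch, assuming the dictionary-style access the OpenAI format uses:

```python
# works identically for the OpenAI and Cohere responses above
print(response["choices"][0]["message"]["content"])
```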

More details 👉
* [Completion() function details](./completion/)
* [Supported models / providers](./providers/)

## streaming

Same example as before; just pass `stream=True` in the completion args.
```python
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)

# cohere call
response = completion("command-nightly", messages, stream=True)
```
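The return value is now an iterator of incremental chunks rather than a single response. A minimal sketch of consuming it, assuming the chunks follow the OpenAI streaming format (new text arrives under `delta`):

```python
# print the reply token-by-token as it arrives
for chunk in response:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="")
```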

More details 👉
* [streaming + async](./completion/stream.md)
* [tutorial for streaming Llama2 on TogetherAI](./tutorials/TogetherAI_liteLLM.md)

## exception handling

LiteLLM maps exceptions across all supported providers to the OpenAI exceptions. All our exceptions inherit from OpenAI's exception types, so any error handling you already have for OpenAI errors should work out of the box with LiteLLM.

```python
import os

from openai.error import AuthenticationError  # in openai<1.0 the exceptions live under openai.error
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"
try:
    # this call fails because the key above is invalid
    completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
except AuthenticationError as e:
    print(e.llm_provider)  # which underlying provider raised the error
```

More details 👉
* [exception mapping](./exception_mapping.md)
* [retries + model fallbacks for completion()](./completion/reliable_completions.md)
* [tutorial for model fallbacks with completion()](./tutorials/fallbacks.md)
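
As a quick illustration of why mapped exceptions are useful, here is a hand-rolled fallback loop; this is a sketch, not LiteLLM's built-in fallback API, and the model list is just an example:

```python
from litellm import completion

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# try providers in order until one succeeds (illustrative only)
for model in ["gpt-3.5-turbo", "command-nightly", "claude-instant-1"]:
    try:
        response = completion(model=model, messages=messages)
        break
    except Exception as e:  # all provider errors inherit from OpenAI's exception types
        print(f"{model} raised {type(e).__name__}, trying next model")
```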
30 changes: 24 additions & 6 deletions docs/my-website/docs/index.md
@@ -1,12 +1,12 @@
---
displayed_sidebar: tutorialSidebar
---
# Litellm
# LiteLLM

import CrispChat from '../src/components/CrispChat.js'

[![PyPI Version](https://img.shields.io/pypi/v/litellm.svg)](https://pypi.org/project/litellm/)
[![PyPI Version](https://img.shields.io/badge/stable%20version-v0.1.345-blue?color=green&link=https://pypi.org/project/litellm/0.1.1/)](https://pypi.org/project/litellm/0.1.1/)
[![PyPI Version](https://img.shields.io/badge/stable%20version-v0.1.583-blue?color=green&link=https://pypi.org/project/litellm/0.1.1/)](https://pypi.org/project/litellm/0.1.1/)
[![CircleCI](https://dl.circleci.com/status-badge/img/gh/BerriAI/litellm/tree/main.svg?style=svg)](https://dl.circleci.com/status-badge/redirect/gh/BerriAI/litellm/tree/main)
![Downloads](https://img.shields.io/pypi/dm/litellm)
[![litellm](https://img.shields.io/badge/%20%F0%9F%9A%85%20liteLLM-OpenAI%7CAzure%7CAnthropic%7CPalm%7CCohere%7CReplicate%7CHugging%20Face-blue?color=green)](https://github.com/BerriAI/litellm)
@@ -23,7 +23,7 @@ a light package to simplify calling OpenAI, Azure, Cohere, Anthropic, Huggingface

<a href='https://docs.litellm.ai/docs/providers' target="_blank"><img alt='None' src='https://img.shields.io/badge/Supported_LLMs-100000?style=for-the-badge&logo=None&logoColor=000000&labelColor=000000&color=8400EA'/></a>

Demo - https://litellm.ai/playground \
Demo - https://litellm.ai/playground
Read the docs - https://docs.litellm.ai/docs/

## quick start
@@ -42,10 +42,28 @@ Stable version
pip install litellm==0.1.345
```

## Streaming Queries
## usage

liteLLM supports streaming the model response back, pass `stream=True` to get a streaming iterator in response.
Streaming is supported for OpenAI, Azure, Anthropic, Huggingface models
```python
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion("command-nightly", messages)
```

## streaming

LiteLLM supports streaming the model response back; pass `stream=True` to get a streaming iterator in the response.
Streaming is supported for all models.

```python
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
```
9 changes: 8 additions & 1 deletion docs/my-website/sidebars.js
@@ -18,10 +18,16 @@ const sidebars = {
// But you can create a sidebar manually
tutorialSidebar: [
{ type: "doc", id: "index" }, // NEW
"tutorials/model_fallbacks",
"getting_started",
{
type: "category",
label: "Completion()",
link: {
type: 'generated-index',
title: 'Completion()',
description: 'Details on the completion() function',
slug: '/completion',
},
items: [
"completion/input",
"completion/output",
@@ -82,6 +88,7 @@ const sidebars = {
'tutorials/litellm_Test_Multiple_Providers',
"tutorials/first_playground",
'tutorials/compare_llms',
"tutorials/model_fallbacks",
],
},
{