Added implementation for Azure OpenAI (langchain-ai#966)
* added support for Azure OpenAI. Also added other Azure OpenAI parameters to make the experience better

* added azure openai to docs and samples

* fixed linting and formatting

* Added tests for exports. Merged all the different OpenAI input types into a more general types definition

* fixed rebase issues

* cleanup

* Cleanup. Also added two new settings/envvars for azure, so different deployments can be used for chat, completion and embeddings.

* Update embeddings

* Add missing exports

* Do not use types that aren't exported

---------

Co-authored-by: Nuno Campos <[email protected]>
dersia and nfcampos authored May 5, 2023
1 parent bc002b9 commit 3495a79
Showing 18 changed files with 757 additions and 279 deletions.
8 changes: 6 additions & 2 deletions docs/docs/getting-started/guide-chat.mdx
@@ -28,7 +28,9 @@

```typescript
import { HumanChatMessage, SystemChatMessage } from "langchain/schema";
const chat = new ChatOpenAI({ temperature: 0 });
```

Here we create a chat model using the API key stored in the environment variable `OPENAI_API_KEY`. We'll be calling this chat model throughout this section.
Here we create a chat model using the API key stored in the environment variable `OPENAI_API_KEY` (or `AZURE_OPENAI_API_KEY` if you are using Azure OpenAI). We'll be calling this chat model throughout this section.

> **&#9432;** Note: if you are using Azure OpenAI, make sure to also set the environment variables `AZURE_OPENAI_API_INSTANCE_NAME`, `AZURE_OPENAI_API_DEPLOYMENT_NAME`, and `AZURE_OPENAI_API_VERSION`.

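Before making any calls, it can help to fail fast when one of these variables is missing. Below is a minimal sketch of such a check (illustrative only, not part of LangChain's API):

```typescript
// Sketch: report which of the Azure OpenAI environment variables listed
// above are missing, so misconfiguration is caught before the first call.
const requiredAzureVars = [
  "AZURE_OPENAI_API_KEY",
  "AZURE_OPENAI_API_INSTANCE_NAME",
  "AZURE_OPENAI_API_DEPLOYMENT_NAME",
  "AZURE_OPENAI_API_VERSION",
];

function missingAzureVars(env: Record<string, string | undefined>): string[] {
  return requiredAzureVars.filter((name) => !env[name]);
}

// With only the key set, the other three variables are reported as missing.
console.log(missingAzureVars({ AZURE_OPENAI_API_KEY: "..." }));
```

In a real application you would pass `process.env` and throw if the returned list is non-empty.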
### Chat Models: Message in, Message out

@@ -50,7 +50,9 @@

AIChatMessage { text: "J'aime programmer." }

#### Multiple Messages

OpenAI's chat-based models (currently `gpt-3.5-turbo` and `gpt-4`) support multiple messages as input. See [here](https://platform.openai.com/docs/guides/chat/chat-vs-completions) for more information. Here is an example of sending a system and user message to the chat model:
OpenAI's chat-based models (currently `gpt-3.5-turbo` and `gpt-4`, plus `gpt-4-32k` on Azure OpenAI) support multiple messages as input. See [here](https://platform.openai.com/docs/guides/chat/chat-vs-completions) for more information. Here is an example of sending a system and user message to the chat model:

> **&#9432;** Note: if you are using Azure OpenAI, make sure to change the deployment name to the deployment for the model you chose.

```typescript
const responseB = await chat.call([
```
58 changes: 49 additions & 9 deletions docs/docs/getting-started/guide-llm.mdx
@@ -39,21 +39,61 @@

We will then need to set the environment variable for the OpenAI key. Three options:

1. We can do this by setting the value in a `.env` file and use the [dotenv](https://github.com/motdotla/dotenv) package to read it.

```bash
OPENAI_API_KEY="..."
```
1.1. For the OpenAI API:

```bash
OPENAI_API_KEY="..."
```

1.2. For Azure OpenAI:

```bash
AZURE_OPENAI_API_KEY="..."
AZURE_OPENAI_API_INSTANCE_NAME="..."
AZURE_OPENAI_API_DEPLOYMENT_NAME="..."
AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME="..."
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME="..."
AZURE_OPENAI_API_VERSION="..."
```

2. Or we can export the environment variable with the following command in your shell:

```bash
export OPENAI_API_KEY=sk-....
```
2.1. For the OpenAI API:

```bash
export OPENAI_API_KEY=sk-....
```

2.2. For Azure OpenAI:

```bash
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_API_INSTANCE_NAME="..."
export AZURE_OPENAI_API_DEPLOYMENT_NAME="..."
export AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME="..."
export AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME="..."
export AZURE_OPENAI_API_VERSION="..."
```

3. Or we can do it when initializing the wrapper along with other arguments. In this example, we probably want the outputs to be MORE random, so we'll initialize it with a HIGH temperature.

```typescript
const model = new OpenAI({ openAIApiKey: "sk-...", temperature: 0.9 });
```
3.1. For the OpenAI API:

```typescript
const model = new OpenAI({ openAIApiKey: "sk-...", temperature: 0.9 });
```

3.2. For Azure OpenAI:

```typescript
const model = new OpenAI({
azureOpenAIApiKey: "...",
azureOpenAIApiInstanceName: "....",
azureOpenAIApiDeploymentName: "....",
azureOpenAIApiVersion: "....",
temperature: 0.9
});
```
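For intuition about what these Azure parameters are for: Azure OpenAI routes requests to instance- and deployment-specific URLs. The sketch below shows the general URL shape (an assumption about the service's URL format, not LangChain's internal code):

```typescript
// Sketch: how the Azure OpenAI settings combine into a request URL.
// The final path segment depends on the API being called
// (e.g. "completions", "chat/completions", or "embeddings").
function azureOpenAIUrl(
  instanceName: string,
  deploymentName: string,
  apiVersion: string,
  api: string
): string {
  return `https://${instanceName}.openai.azure.com/openai/deployments/${deploymentName}/${api}?api-version=${apiVersion}`;
}

console.log(
  azureOpenAIUrl("my-instance", "my-deployment", "2023-03-15-preview", "completions")
);
// → https://my-instance.openai.azure.com/openai/deployments/my-deployment/completions?api-version=2023-03-15-preview
```

This is also why separate deployment names exist for chat, completions, and embeddings: each deployment gets its own path.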

Once we have initialized the wrapper, we can now call it on some input!

28 changes: 21 additions & 7 deletions docs/docs/modules/indexes/vector_stores/integrations/milvus.md
@@ -15,16 +15,30 @@

Only available on Node.js.
1. Run Milvus instance with Docker on your computer [docs](https://milvus.io/docs/v2.1.x/install_standalone-docker.md)
2. Install the Milvus Node.js SDK.

```bash npm2yarn
npm install -S @zilliz/milvus2-sdk-node
```
```bash npm2yarn
npm install -S @zilliz/milvus2-sdk-node
```

3. Set up environment variables for Milvus before running the code

```bash
export OPENAI_API_KEY=YOUR_OPEN_API_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530
```
3.1 OpenAI

```bash
export OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530
```

3.2 Azure OpenAI

```bash
export AZURE_OPENAI_API_KEY=YOUR_AZURE_OPENAI_API_KEY_HERE
export AZURE_OPENAI_API_INSTANCE_NAME=YOUR_AZURE_OPENAI_INSTANCE_NAME_HERE
export AZURE_OPENAI_API_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_COMPLETIONS_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=YOUR_AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME_HERE
export AZURE_OPENAI_API_VERSION=YOUR_AZURE_OPENAI_API_VERSION_HERE
export MILVUS_URL=YOUR_MILVUS_URL_HERE # for example http://localhost:19530
```
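Because both providers are configured purely through environment variables, code that needs embeddings can select constructor options at runtime. A sketch of one way to do that (the helper and its fallback order are illustrative; the `azureOpenAIApi*` option names come from this change):

```typescript
// Sketch: pick embeddings constructor options based on which provider's
// API key is present. Prefers the embeddings-specific deployment when set.
function embeddingsOptionsFromEnv(
  env: Record<string, string | undefined>
): Record<string, string> {
  if (env.AZURE_OPENAI_API_KEY) {
    return {
      azureOpenAIApiKey: env.AZURE_OPENAI_API_KEY,
      azureOpenAIApiDeploymentName:
        env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME ??
        env.AZURE_OPENAI_API_DEPLOYMENT_NAME ??
        "",
    };
  }
  return { openAIApiKey: env.OPENAI_API_KEY ?? "" };
}
```

The returned object could then be passed to `new OpenAIEmbeddings(...)`.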

## Index and query docs

6 changes: 6 additions & 0 deletions docs/docs/modules/models/chat/integrations.mdx
@@ -15,6 +15,12 @@

import OpenAI from "@examples/models/chat/integration_openai.ts";

<CodeBlock language="typescript">{OpenAI}</CodeBlock>

## Azure `ChatOpenAI`

import AzureOpenAI from "@examples/models/chat/integration_azure_openai.ts";

<CodeBlock language="typescript">{AzureOpenAI}</CodeBlock>

## `ChatAnthropic`

import Anthropic from "@examples/models/chat/integration_anthropic.ts";
19 changes: 19 additions & 0 deletions docs/docs/modules/models/embeddings/integrations.mdx
@@ -19,6 +19,25 @@

```typescript
const embeddings = new OpenAIEmbeddings({
});
```

## Azure `OpenAIEmbeddings`

The `OpenAIEmbeddings` class uses the OpenAI API on Azure to generate embeddings for a given text. By default it strips new line characters from the text, as recommended by OpenAI, but you can disable this by passing `stripNewLines: false` to the constructor.

```typescript
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

const embeddings = new OpenAIEmbeddings({
azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiCompletionsDeploymentName:
"YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
azureOpenAIApiEmbeddingsDeploymentName:
"YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
});
```
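The `stripNewLines` behavior mentioned above amounts to replacing newline characters with spaces before the text is embedded. Roughly (a sketch, not the library's exact implementation):

```typescript
// Sketch of the stripNewLines preprocessing: newlines are replaced with
// spaces before the text is sent for embedding, as recommended by OpenAI.
function stripNewLines(text: string): string {
  return text.replace(/\n/g, " ");
}

console.log(stripNewLines("colorful\nsocks")); // → "colorful socks"
```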

## `CohereEmbeddings`

The `CohereEmbeddings` class uses the Cohere API to generate embeddings for a given text.
49 changes: 49 additions & 0 deletions docs/docs/modules/models/llms/integrations.mdx
@@ -22,6 +22,28 @@

```typescript
const res = await model.call(
console.log({ res });
```

## Azure `OpenAI`

```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({
temperature: 0.9,
azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiCompletionsDeploymentName:
"YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
azureOpenAIApiEmbeddingsDeploymentName:
"YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
});
const res = await model.call(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log({ res });
```

## `HuggingFaceInference`

@@ -100,6 +122,33 @@

```typescript
const res = await model.call(
);
```

### Azure `PromptLayerOpenAI`

LangChain integrates with PromptLayer for logging and debugging prompts and responses. To add support for PromptLayer:

1. Create a PromptLayer account here: [https://promptlayer.com](https://promptlayer.com).
2. Create an API token and pass it either as `promptLayerApiKey` argument in the `PromptLayerOpenAI` constructor or in the `PROMPTLAYER_API_KEY` environment variable.

```typescript
import { PromptLayerOpenAI } from "langchain/llms/openai";

const model = new PromptLayerOpenAI({
temperature: 0.9,
azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiCompletionsDeploymentName:
"YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
azureOpenAIApiEmbeddingsDeploymentName:
"YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
promptLayerApiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.PROMPTLAYER_API_KEY
});
const res = await model.call(
  "What would be a good company name for a company that makes colorful socks?"
);
```

The request and the response will be logged in the [PromptLayer dashboard](https://promptlayer.com/home).

> **_Note:_** In streaming mode PromptLayer will not log the response.
7 changes: 7 additions & 0 deletions examples/.env.example
@@ -2,6 +2,13 @@

ANTHROPIC_API_KEY=ADD_YOURS_HERE # https://www.anthropic.com/
COHERE_API_KEY=ADD_YOURS_HERE # https://dashboard.cohere.ai/api-keys
HUGGINGFACEHUB_API_KEY=ADD_YOURS_HERE # https://huggingface.co/settings/tokens
OPENAI_API_KEY=ADD_YOURS_HERE # https://platform.openai.com/account/api-keys
# Azure Portal -> Cognitive Services -> OpenAI -> Choose your instance -> Keys and Endpoint
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=ADD_YOURS_HERE # Azure Portal -> Cognitive Services -> OpenAI
AZURE_OPENAI_API_DEPLOYMENT_NAME=ADD_YOURS_HERE # Azure Portal -> Cognitive Services -> OpenAI -> Choose your instance -> Go to Azure OpenAI Studio -> Deployments
AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=ADD_YOURS_HERE # Azure Portal -> Cognitive Services -> OpenAI -> Choose your instance -> Go to Azure OpenAI Studio -> Deployments
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=ADD_YOURS_HERE # Azure Portal -> Cognitive Services -> OpenAI -> Choose your instance -> Go to Azure OpenAI Studio -> Deployments
AZURE_OPENAI_API_VERSION=ADD_YOURS_HERE # Azure Portal -> Cognitive Services -> OpenAI -> Choose your instance -> Go to Azure OpenAI Studio -> Completions/Chat -> Choose Deployment -> View Code
OPENSEARCH_URL=ADD_YOURS_HERE # http://127.0.0.1:9200
PINECONE_API_KEY=ADD_YOURS_HERE # https://app.pinecone.io/organizations
PINECONE_ENVIRONMENT=ADD_YOURS_HERE
13 changes: 13 additions & 0 deletions examples/src/models/chat/integration_azure_openai.ts
@@ -0,0 +1,13 @@
import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({
temperature: 0.9,
azureOpenAIApiKey: "YOUR-AOAI-API-KEY", // In Node.js defaults to process.env.AZURE_OPENAI_API_KEY
azureOpenAIApiInstanceName: "YOUR-AOAI-INSTANCE-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_INSTANCE_NAME
azureOpenAIApiDeploymentName: "YOUR-AOAI-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME
azureOpenAIApiCompletionsDeploymentName:
"YOUR-AOAI-COMPLETIONS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME
azureOpenAIApiEmbeddingsDeploymentName:
"YOUR-AOAI-EMBEDDINGS-DEPLOYMENT-NAME", // In Node.js defaults to process.env.AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME
azureOpenAIApiVersion: "YOUR-AOAI-API-VERSION", // In Node.js defaults to process.env.AZURE_OPENAI_API_VERSION
});
16 changes: 15 additions & 1 deletion examples/src/tools/webbrowser.ts
@@ -3,8 +3,22 @@

import { ChatOpenAI } from "langchain/chat_models/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

export async function run() {
  // This will not work with the Azure OpenAI API yet:
  // the Azure OpenAI API does not support embedding with multiple inputs yet
  // ("Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.")
  // So we fail fast when the Azure OpenAI API is used.
if (process.env.AZURE_OPENAI_API_KEY) {
throw new Error(
"Azure OpenAI API does not support embedding with multiple inputs yet"
);
}

const model = new ChatOpenAI({ temperature: 0 });
const embeddings = new OpenAIEmbeddings();
const embeddings = new OpenAIEmbeddings(
process.env.AZURE_OPENAI_API_KEY
? { azureOpenAIApiDeploymentName: "Embeddings2" }
: {}
);

const browser = new WebBrowser({ model, embeddings });

7 changes: 7 additions & 0 deletions langchain/.env.example
@@ -2,6 +2,13 @@

ANTHROPIC_API_KEY=ADD_YOURS_HERE
COHERE_API_KEY=ADD_YOURS_HERE
HUGGINGFACEHUB_API_KEY=ADD_YOURS_HERE
OPENAI_API_KEY=ADD_YOURS_HERE
# If AZURE_OPENAI_API_KEY is set, OPENAI_API_KEY will be ignored.
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_API_INSTANCE_NAME=ADD_YOURS_HERE
AZURE_OPENAI_API_DEPLOYMENT_NAME=ADD_YOURS_HERE
AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME=ADD_YOURS_HERE
AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME=ADD_YOURS_HERE
AZURE_OPENAI_API_VERSION=ADD_YOURS_HERE
OPENSEARCH_URL=http://127.0.0.1:9200
PINECONE_API_KEY=ADD_YOURS_HERE
PINECONE_ENVIRONMENT=ADD_YOURS_HERE
