Some improvements to the docs (langchain-ai#176)
* Improve getting started instructions

* Add llm overview doc

* Improve order of prompts docs
nfcampos authored Mar 1, 2023
1 parent 5f544cb commit 6148be9
Showing 5 changed files with 73 additions and 3 deletions.
24 changes: 21 additions & 3 deletions docs/docs/getting-started.md
@@ -10,14 +10,32 @@ To get started, install LangChain with the following command:
npm i langchain
```

We currently support LangChain on Node.js 16, 18, and 19. Go [here](https://github.com/hwchase17/langchainjs/discussions/152) to vote on the next environment we should support.

### Node.js 16

If you are running this on Node.js 16, either:

- run your application with `NODE_OPTIONS='--experimental-fetch' node ...`, or
- install `node-fetch` and follow the instructions [here](https://github.com/node-fetch/node-fetch#providing-global-access), as sketched below
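
If you take the `node-fetch` route, here is a minimal sketch of the global setup, based on the instructions in node-fetch's README (the file name and details are illustrative):

```typescript
// fetch-polyfill.ts (illustrative): register node-fetch as the global fetch
// implementation before any LangChain code runs.
import fetch, { Headers, Request, Response } from "node-fetch";

if (!globalThis.fetch) {
  (globalThis as any).fetch = fetch;
  (globalThis as any).Headers = Headers;
  (globalThis as any).Request = Request;
  (globalThis as any).Response = Response;
}
```

Import this file once at your application's entrypoint so the globals are in place before LangChain makes any requests.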

If you are running this on Node.js 18 or 19, you do not need to do anything.

### TypeScript

We suggest updating your `tsconfig.json` to include the following:

```json
{
  "compilerOptions": {
    ...
    "target": "ES2020", // or higher
    "module": "nodenext"
  }
}
```

## Loading the library

### ESM in Node.js

@@ -39,7 +57,7 @@ const { OpenAI } = await import("langchain");

LangChain currently supports only Node.js-based environments. This includes Vercel Serverless functions (but not Edge functions), as well as other serverless environments, like AWS Lambda and Google Cloud Functions.

We currently do not support running LangChain in the browser. We are listening to the community on additional environments that we should support. Go [here](https://github.com/hwchase17/langchainjs/discussions/152) to vote and discuss the next environments we should support.

Please see [Deployment](./deployment.md) for more information on deploying LangChain applications.

40 changes: 40 additions & 0 deletions docs/docs/modules/llms/overview.md
@@ -0,0 +1,40 @@
---
sidebar_position: 1
---

# LLM Overview

Large Language Models (LLMs) are a core component of LangChain. LangChain is not a provider of LLMs, but rather provides a standard interface through which you can interact with a variety of LLMs.

See the documentation for each LLM on the left sidebar for more information on how to use them.
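
As a taste of that interface, here is a minimal sketch of calling a model (assuming the `OpenAI` wrapper from `langchain/llms` and an `OPENAI_API_KEY` set in your environment):

```typescript
import { OpenAI } from "langchain/llms";

// Every LLM wrapper exposes the same interface, so swapping providers
// means changing this constructor, not the calling code.
const model = new OpenAI({ temperature: 0.9 });

const res = await model.call(
  "What would be a good company name for a company that makes colorful socks?"
);
console.log(res);
```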

## Caching

LangChain provides an optional caching layer for LLMs. This is useful for two reasons:

1. If you often request the same completion multiple times, it can save you money by reducing the number of API calls you make to the LLM provider.
2. It can speed up your application by serving repeated requests from the cache instead of waiting on the LLM provider.

Currently, the cache is stored in-memory. This means that if you restart your application, the cache will be cleared. We're working on adding support for persistent caching.

To enable it, pass `cache: true` when you instantiate the LLM. For example:

```typescript
import { OpenAI } from "langchain/llms";

const model = new OpenAI({ cache: true });
```
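
As a sketch of what the cache buys you (assuming the LLM's `call` method), an identical prompt issued twice only reaches the provider once:

```typescript
import { OpenAI } from "langchain/llms";

const model = new OpenAI({ cache: true });

// First call hits the OpenAI API and stores the completion in the in-memory cache.
const first = await model.call("Tell me a joke");

// The identical prompt is served from the cache, with no second API call.
const second = await model.call("Tell me a joke");
```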

## Dealing with rate limits

Some LLM providers have rate limits. If you exceed the rate limit, you'll get an error. To help you deal with this, LangChain provides a `concurrency` option when instantiating an LLM. This option allows you to specify the maximum number of concurrent requests you want to make to the LLM provider. If you exceed this number, LangChain will automatically queue up your requests to be sent as previous requests complete.

For example, if you set `concurrency: 5`, then LangChain will only send 5 requests to the LLM provider at a time. If you send 10 requests, the first 5 will be sent immediately, and the next 5 will be queued up. Once one of the first 5 requests completes, the next request in the queue will be sent.

To use this feature, simply pass `concurrency: <number>` when you instantiate the LLM. For example:

```typescript
import { OpenAI } from "langchain/llms";

const model = new OpenAI({ concurrency: 5 });
```
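
To see the queueing in action, here is a sketch (the prompt text is illustrative) that fires ten requests at once; with `concurrency: 5`, at most five are in flight at any time:

```typescript
import { OpenAI } from "langchain/llms";

const model = new OpenAI({ concurrency: 5 });

// Ten prompts fired together: five go out immediately, the rest are queued
// and dispatched as in-flight requests complete.
const prompts = Array.from({ length: 10 }, (_, i) => `Write one line about the number ${i}.`);
const completions = await Promise.all(prompts.map((prompt) => model.call(prompt)));
console.log(completions.length); // 10
```
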
4 changes: 4 additions & 0 deletions docs/docs/modules/prompts/load_from_hub.md
@@ -1,3 +1,7 @@
---
sidebar_position: 3
---

# Load from Hub

[LangChainHub](https://github.com/hwchase17/langchain-hub) contains a collection of prompts which can be loaded directly via LangChain.
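
For example, here is a minimal sketch of loading a hub prompt, assuming a `loadPrompt` helper that accepts `lc://` paths (the helper's entrypoint and the path shown are illustrative):

```typescript
import { loadPrompt } from "langchain/prompts";

// Load a prompt from LangChainHub by its lc:// path (illustrative path).
const prompt = await loadPrompt("lc://prompts/hello-world/prompt.yaml");

const res = await prompt.format({});
console.log(res);
```
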
4 changes: 4 additions & 0 deletions docs/docs/modules/prompts/partial_prompts.md
@@ -1,3 +1,7 @@
---
sidebar_position: 2
---

# Partial Prompt Templates

A prompt template is a class with a `.format` method that takes in a key-value map and returns a string (a prompt) to pass to the language model. As with functions, it can make sense to "partial" a prompt template: pass in a subset of the required values up front to create a new prompt template that expects only the remaining values.
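
A minimal sketch of the idea, assuming a `PromptTemplate` with a `partial` method as described:

```typescript
import { PromptTemplate } from "langchain/prompts";

const full = new PromptTemplate({
  template: "{foo}{bar}",
  inputVariables: ["foo", "bar"],
});

// Pre-fill "foo"; the resulting template only expects "bar".
const partial = await full.partial({ foo: "foo" });

const result = await partial.format({ bar: "baz" });
// -> "foobaz"
```
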
4 changes: 4 additions & 0 deletions docs/docs/modules/prompts/prompt_template.md
@@ -1,3 +1,7 @@
---
sidebar_position: 1
---

# Prompt Templates

This example walks through how to use PromptTemplates. At their core, prompt templates are objects made up of a template with certain input variables. The object can then be called with `.format(...)` to fill in those input variables and produce the final prompt string.
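
A minimal sketch of that flow, assuming the `PromptTemplate` class from `langchain/prompts`:

```typescript
import { PromptTemplate } from "langchain/prompts";

const template = new PromptTemplate({
  template: "What is a good name for a company that makes {product}?",
  inputVariables: ["product"],
});

// .format fills in the input variables and returns the finished prompt string.
const prompt = await template.format({ product: "colorful socks" });
// -> "What is a good name for a company that makes colorful socks?"
```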
