Polish
jacoblee93 committed Jul 19, 2023
1 parent 57fe218 commit 62c5a72
Showing 2 changed files with 62 additions and 2 deletions.
@@ -1,6 +1,6 @@
-# Split code
+# Split code and markup

-CodeTextSplitter allows you to split your code with multiple language support.
+CodeTextSplitter allows you to split your code and markup with support for multiple languages.

import Example from "@snippets/modules/data_connection/document_transformers/text_splitters/code_splitter.mdx"

docs/snippets/modules/model_io/models/llms/how_to/llm_caching.mdx (60 additions, 0 deletions)
@@ -1,3 +1,5 @@
+import CodeBlock from "@theme/CodeBlock";
+

```typescript
import { OpenAI } from "langchain/llms/openai";

// ... (remaining lines collapsed in the diff view)

console.log(res2);
/*
  "\n\nWhy did the chicken cross the road?\n\nTo get to the other side."
*/
```
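
Since most of that example is collapsed in the diff, here is a minimal sketch of the in-memory caching it demonstrates; `cache: true` enables LangChain's default in-memory cache, and the prompt text is purely illustrative:

```typescript
import { OpenAI } from "langchain/llms/openai";

// `cache: true` turns on the default in-memory cache.
const model = new OpenAI({ cache: true });

// The first call hits the OpenAI API; the second identical call
// is answered from the in-memory cache.
const res1 = await model.call("Tell me a joke");
const res2 = await model.call("Tell me a joke");
console.log(res2);
```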

## Caching with Momento

LangChain also provides a Momento-based cache. [Momento](https://gomomento.com) is a distributed, serverless cache that requires zero setup or infrastructure maintenance. To use it, you'll need to install the `@gomomento/sdk` package:

```bash npm2yarn
npm install @gomomento/sdk
```

Next, you'll need to sign up and create an API key. Once you've done that, pass a `cache` option when you instantiate the LLM, like this:

import MomentoCacheExample from "@examples/cache/momento.ts";

<CodeBlock language="typescript">{MomentoCacheExample}</CodeBlock>
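
If the imported example isn't visible, a minimal sketch might look like the following; the cache name `langchain`, the `MOMENTO_AUTH_TOKEN` environment variable, and the one-day TTL are illustrative assumptions:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { MomentoCache } from "langchain/cache/momento";
import {
  CacheClient,
  Configurations,
  CredentialProvider,
} from "@gomomento/sdk";

// The client reads your Momento API key from an environment variable.
const client = new CacheClient({
  configuration: Configurations.Laptop.v4(),
  credentialProvider: CredentialProvider.fromEnvironmentVariable({
    environmentVariableName: "MOMENTO_AUTH_TOKEN",
  }),
  defaultTtlSeconds: 60 * 60 * 24,
});

// `fromProps` creates the cache in Momento if it doesn't already exist.
const cache = await MomentoCache.fromProps({
  client,
  cacheName: "langchain",
});

const model = new OpenAI({ cache });
```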

## Caching with Redis

LangChain also provides a Redis-based cache. This is useful if you want to share the cache across multiple processes or servers. To use it, you'll need to install the `redis` package:

```bash npm2yarn
npm install redis
```

Then, you can pass a `cache` option when you instantiate the LLM. For example:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { RedisCache } from "langchain/cache/redis";
import { createClient } from "redis";

// See https://github.com/redis/node-redis for connection options
const client = createClient();
// The client must be connected before the cache issues any commands.
await client.connect();

const cache = new RedisCache(client);

const model = new OpenAI({ cache });
```
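
As with the in-memory cache, repeated identical prompts should then be served from Redis rather than the model provider; a quick illustrative check:

```typescript
// The first call is a cache miss and hits the OpenAI API...
const res1 = await model.call("Tell me a joke");

// ...while the second identical call is served from Redis.
const res2 = await model.call("Tell me a joke");

console.log(res1 === res2); // true
```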

## Caching with Upstash Redis

LangChain also provides an Upstash Redis-based cache. Like the standard Redis cache, it's useful if you want to share the cache across multiple processes or servers; because the Upstash Redis client works over HTTP, it also supports edge environments. To use it, you'll need to install the `@upstash/redis` package:

```bash npm2yarn
npm install @upstash/redis
```

You'll also need an [Upstash account](https://docs.upstash.com/redis#create-account) and a [Redis database](https://docs.upstash.com/redis#create-a-database) to connect to. Once you've done that, retrieve your REST URL and REST token.

Then, you can pass a `cache` option when you instantiate the LLM. For example:

import UpstashRedisCacheExample from "@examples/cache/upstash_redis.ts";

<CodeBlock language="typescript">{UpstashRedisCacheExample}</CodeBlock>
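
If the imported example isn't visible, here is a minimal sketch, assuming you pass the REST URL and token directly (the placeholder strings stand in for your own values):

```typescript
import { OpenAI } from "langchain/llms/openai";
import { UpstashRedisCache } from "langchain/cache/upstash_redis";

// Replace the placeholders with your Upstash REST URL and token.
const cache = new UpstashRedisCache({
  config: {
    url: "UPSTASH_REDIS_REST_URL",
    token: "UPSTASH_REDIS_REST_TOKEN",
  },
});

const model = new OpenAI({ cache });
```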

You can also directly pass in a previously created [@upstash/redis](https://docs.upstash.com/redis/sdks/javascriptsdk/overview) client instance:

import AdvancedUpstashRedisCacheExample from "@examples/cache/upstash_redis_advanced.ts";

<CodeBlock language="typescript">{AdvancedUpstashRedisCacheExample}</CodeBlock>
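
A sketch of that variant, again with placeholder credentials:

```typescript
import { Redis } from "@upstash/redis";
import { OpenAI } from "langchain/llms/openai";
import { UpstashRedisCache } from "langchain/cache/upstash_redis";

// Construct the client yourself; `Redis.fromEnv()` also works if the
// UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN env vars are set.
const client = new Redis({
  url: "UPSTASH_REDIS_REST_URL",
  token: "UPSTASH_REDIS_REST_TOKEN",
});

const cache = new UpstashRedisCache({ client });

const model = new OpenAI({ cache });
```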
