Commit 1d98a94: Fix docs build

jacoblee93 committed Jul 18, 2023
1 parent 5ac0236 · commit 1d98a94

Showing 27 changed files with 72 additions and 122 deletions.
2 changes: 1 addition & 1 deletion docs/build.sh
@@ -14,4 +14,4 @@ cp -r {docs_skeleton,snippets} _dist
cp -r extras/ _dist/docs_skeleton/docs
cd _dist/docs_skeleton
yarn install
-yarn start
+yarn build
4 changes: 2 additions & 2 deletions docs/docs_skeleton/docs/get_started/quickstart.mdx
@@ -10,7 +10,7 @@ import Install from "@snippets/get_started/quickstart/installation.mdx"

<Install/>

-For more details, see our [Installation guide](/docs/get_started/installation.html).
+For more details, see our [Installation guide](/docs/get_started/installation).

## Environment setup

@@ -108,7 +108,7 @@ Agents do just this: they use a language model to determine which actions to take…
To load an agent, you need to choose a(n):
- LLM/Chat model: The language model powering the agent.
- Tool(s): A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. For a list of predefined tools and their specifications, see the [Tools documentation](/docs/modules/agents/tools/).
-- Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see [here](/docs/modules/agents/how_to/custom_agent.html). For a list of supported agents and their specifications, see [here](/docs/modules/agents/agent_types/).
+- Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see [here](/docs/modules/agents). For a list of supported agents and their specifications, see [here](/docs/modules/agents/agent_types/).

For this example, we'll be using SerpAPI to query a search engine.
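For orientation, here is a minimal sketch of those three choices wired together with SerpAPI; the model, agent type string, and question are illustrative assumptions, not content from this commit:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { SerpAPI } from "langchain/tools";
import { initializeAgentExecutorWithOptions } from "langchain/agents";

// An LLM, one tool, and a supported agent type string are all that's needed.
const model = new OpenAI({ temperature: 0 });
const tools = [new SerpAPI(process.env.SERPAPI_API_KEY)];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "zero-shot-react-description",
});
const result = await executor.call({ input: "What is the weather in Paris today?" });
console.log(result.output);
```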

8 changes: 4 additions & 4 deletions docs/docs_skeleton/docs/modules/agents/agent_types/index.mdx
@@ -10,25 +10,25 @@ Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning a response to the user.
Here are the agents available in LangChain.

-### [Zero-shot ReAct](/docs/modules/agents/agent_types/react.html)
+### [Zero-shot ReAct](/docs/modules/agents/agent_types/react)

This agent uses the [ReAct](https://arxiv.org/pdf/2205.00445.pdf) framework to determine which tool to use
based solely on the tool's description. Any number of tools can be provided.
This agent requires that a description is provided for each tool.

**Note**: This is the most general purpose action agent.

-### [OpenAI Functions](/docs/modules/agents/agent_types/openai_functions_agent.html)
+### [OpenAI Functions](/docs/modules/agents/agent_types/openai_functions_agent)

Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been explicitly fine-tuned to detect when a
function should be called and respond with the inputs that should be passed to the function.
The OpenAI Functions Agent is designed to work with these models.

-### [Conversational](/docs/modules/agents/agent_types/chat_conversation_agent.html)
+### [Conversational](/docs/modules/agents/agent_types/chat_conversation_agent)

This agent is designed to be used in conversational settings.
The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.

-## [Plan-and-execute agents](/docs/modules/agents/agent_types/plan_and_execute.html)
+## [Plan-and-execute agents](/docs/modules/agents/agent_types/plan_and_execute)
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
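A hedged sketch of how a plan-and-execute agent might be constructed; the experimental entrypoint and the `fromLLMAndTools` signature are assumptions based on the package layout of this era:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { Calculator } from "langchain/tools/calculator";
import { PlanAndExecuteAgentExecutor } from "langchain/experimental/plan_and_execute";

// Plan-and-execute: the planner LLM drafts the steps, then each step runs in order.
const executor = PlanAndExecuteAgentExecutor.fromLLMAndTools({
  llm: new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 }),
  tools: [new Calculator()],
});
const result = await executor.call({ input: "What is 13 raised to the 0.3432 power?" });
console.log(result.output);
```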
2 changes: 1 addition & 1 deletion docs/docs_skeleton/docs/modules/agents/index.mdx
@@ -42,7 +42,7 @@ At a high-level a plan-and-execute agent:
2. Plans the full sequence of steps to take
3. Executes the steps in order, passing the outputs of past steps as inputs to future steps

-The most typical implementation is to have the planner be a language model, and the executor be an action agent. Read more [here](/docs/modules/agents/agent_types/plan_and_execute.html).
+The most typical implementation is to have the planner be a language model, and the executor be an action agent. Read more [here](/docs/modules/agents/agent_types/plan_and_execute).

## Get started

2 changes: 1 addition & 1 deletion docs/docs_skeleton/src/pages/index.js
@@ -11,5 +11,5 @@ import React from "react";
import { Redirect } from "@docusaurus/router";

export default function Home() {
-  return <Redirect to="docs/get_started/introduction.html" />;
+  return <Redirect to="docs/get_started/introduction" />;
}
2 changes: 1 addition & 1 deletion docs/extras/ecosystem/integrations/unstructured.mdx
@@ -25,7 +25,7 @@ import SingleExample from "@examples/document_loaders/unstructured.ts";

## Directories

-You can also load all of the files in the directory using `UnstructuredDirectoryLoader`, which inherits from [`DirectoryLoader`](../modules/data_connection/document_loaders/integrations/file_loaders/directory):
+You can also load all of the files in the directory using `UnstructuredDirectoryLoader`, which inherits from [`DirectoryLoader`](/docs/modules/data_connection/document_loaders/integrations/file_loaders/directory):

import DirectoryExample from "@examples/document_loaders/unstructured_directory.ts";
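A minimal sketch of the directory loader described above; the directory path and the `apiUrl` option are illustrative assumptions:

```typescript
import { UnstructuredDirectoryLoader } from "langchain/document_loaders/fs/unstructured";

// Load every supported file found under ./data (path is illustrative).
const loader = new UnstructuredDirectoryLoader("./data", {
  apiUrl: "http://localhost:8000/general/v0/general",
});
const docs = await loader.load();
```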

2 changes: 1 addition & 1 deletion docs/extras/modules/agents/how_to/callbacks.mdx
@@ -5,6 +5,6 @@ import CallbacksExample from "@examples/agents/agent_callbacks.ts";

You can subscribe to a number of events that are emitted by the Agent and the underlying tools, chains and models via callbacks.

-For more info on the events available see the [Callbacks](/docs/production/callbacks/) section of the docs.
+For more info on the events available see the [Callbacks](/docs/modules/callbacks/) section of the docs.

<CodeBlock language="typescript">{CallbacksExample}</CodeBlock>
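A rough sketch of subscribing to those events; the two-argument `call` form and the built-in console handler are assumptions that may vary by version:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { Calculator } from "langchain/tools/calculator";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ConsoleCallbackHandler } from "langchain/callbacks";

const executor = await initializeAgentExecutorWithOptions(
  [new Calculator()],
  new OpenAI({ temperature: 0 }),
  { agentType: "zero-shot-react-description" }
);

// A handler passed at call time receives events from the agent and from the
// tools, chains, and models it runs underneath.
const result = await executor.call(
  { input: "What is 75 divided by 3?" },
  [new ConsoleCallbackHandler()]
);
```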
2 changes: 1 addition & 1 deletion docs/extras/modules/agents/tools/integrations/index.mdx
@@ -48,4 +48,4 @@ LangChain provides the following tools you can use out of the box:
[WikipediaQueryRun]: /docs/api/tools/classes/WikipediaQueryRun
[VectorStoreQATool]: /docs/api/tools/classes/VectorStoreQATool
[ZapierNLARunAction]: /docs/api/tools/classes/ZapierNLARunAction
-[ZapierToolkit]: ../zapier_agent
+[ZapierToolkit]: ./zapier_agent
@@ -8,4 +8,4 @@ LangChain is designed to be extensible. You can add your own custom Chains and Agents…

## Adding callbacks to custom Chains

-When you create a custom chain you can easily set it up to use the same callback system as all the built-in chains. See this guide for more information on how to [create custom chains and use callbacks inside them](../../modules/chains#subclassing-basechain).
+When you create a custom chain you can easily set it up to use the same callback system as all the built-in chains. See this guide for more information on how to [create custom chains and use callbacks inside them](/docs/modules/chains#subclassing-basechain).
4 changes: 2 additions & 2 deletions docs/extras/modules/chains/popular/structured_output.mdx
@@ -13,7 +13,7 @@ Must be used with an [OpenAI functions](https://platform.openai.com/docs/guides/…
This chain leverages OpenAI functions to output objects that match a given format for any given prompt.
It converts input schema into an OpenAI function, then forces OpenAI to call that function to return a response in the correct format.

-You can use it where you would use a chain with a [`StructuredOutputParser`](../../prompts/output_parsers) without any special
+You can use it where you would use a chain with a [`StructuredOutputParser`](/docs/modules/model_io/output_parsers) without any special
instructions stuffed into the prompt. It will also more reliably output structured results with higher `temperature` values, making it better suited
for more creative applications.

@@ -35,5 +35,5 @@ import GenerateExample from "@examples/chains/openai_functions_structured_genera…

### Customization

-This chain takes all the same arguments as a standard [`LLMChain`](../llm_chain) minus an `outputParser`. It will also be created with a default model set to `gpt-3.5-turbo-0613`,
+This chain takes all the same arguments as a standard [`LLMChain`](/docs/modules/chains/foundational/llm_chain) minus an `outputParser`. It will also be created with a default model set to `gpt-3.5-turbo-0613`,
but you can pass an options parameter into the input parameters with a pre-created `ChatOpenAI` instance as `llm`.
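Putting those pieces together, a hedged sketch using a zod schema; the `createStructuredOutputChainFromZod` entrypoint and its option names are assumptions for illustration:

```typescript
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";
import { createStructuredOutputChainFromZod } from "langchain/chains/openai_functions";

// Force the model to return an object matching the zod schema.
const chain = createStructuredOutputChainFromZod(
  z.object({
    name: z.string().describe("The person's name"),
    age: z.number().describe("The person's age"),
  }),
  {
    prompt: PromptTemplate.fromTemplate("Describe a fictional person: {input}"),
    llm: new ChatOpenAI({ modelName: "gpt-3.5-turbo-0613", temperature: 1 }),
  }
);
const result = await chain.call({ input: "a retired astronaut" });
console.log(result.output);
```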
@@ -4,7 +4,7 @@ hide_table_of_contents: true

# Unstructured

-This example covers how to use [Unstructured](docs/ecosystem/unstructured) to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.
+This example covers how to use [Unstructured](docs/ecosystem/integrations/unstructured) to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.

## Setup

@@ -4,7 +4,7 @@ hide_table_of_contents: true

# Vector Store

-Once you've created a [Vector Store](../vector_stores/), the way to use it as a Retriever is very simple:
+Once you've created a [Vector Store](/docs/modules/data_connection/vectorstores), the way to use it as a Retriever is very simple:

```typescript
vectorStore = ...
@@ -5,6 +5,6 @@ import DebuggingExample from "@examples/models/llm/llm_debugging.ts";

Especially when using an agent, there can be a lot of back-and-forth going on behind the scenes as a LLM processes a prompt. For agents, the response object contains an intermediateSteps object that you can print to see an overview of the steps it took to get there. If that's not enough and you want to see every exchange with the LLM, you can pass callbacks to the LLM for custom logging (or anything else you want to do) as the model goes through the steps:

-For more info on the events available see the [Callbacks](/docs/production/callbacks/) section of the docs.
+For more info on the events available see the [Callbacks](/docs/modules/callbacks/) section of the docs.

<CodeBlock language="typescript">{DebuggingExample}</CodeBlock>
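A minimal sketch of passing callbacks to the LLM for custom logging; the handler shape shown is the package's plain handler object, and the prompt is illustrative:

```typescript
import { OpenAI } from "langchain/llms/openai";

// Log every exchange with the model by attaching handlers at construction time.
const model = new OpenAI({
  callbacks: [
    {
      handleLLMStart: async (_llm, prompts) => console.log("Prompt:", prompts[0]),
      handleLLMEnd: async (output) => console.log("Result:", JSON.stringify(output, null, 2)),
    },
  ],
});
await model.predict("What is 2 + 2?");
```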
2 changes: 1 addition & 1 deletion docs/extras/use_cases/summarization.mdx
@@ -12,4 +12,4 @@ So what do you do then?

To get started, we would recommend checking out the summarization chain which attacks this problem in a recursive manner.

-- [Summarization Chain](../modules/chains/additional/summarization)
+- [Summarization Chain](/docs/modules/chains/popular/summarize)
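A hedged sketch of that recursive approach using a map-reduce summarization chain; the splitter settings and input text are illustrative assumptions:

```typescript
import { OpenAI } from "langchain/llms/openai";
import { loadSummarizationChain } from "langchain/chains";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Split a long text into chunks, summarize each, then summarize the summaries.
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await splitter.createDocuments(["...a very long document..."]);

const model = new OpenAI({ temperature: 0 });
const chain = loadSummarizationChain(model, { type: "map_reduce" });
const res = await chain.call({ input_documents: docs });
console.log(res.text);
```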
3 changes: 2 additions & 1 deletion docs/package.json
@@ -4,7 +4,8 @@
"private": true,
"scripts": {
"build": "bash ./build.sh",
"dev": "yarn build & nodemon -e mdx,md --exec \"bash\" ./nodemon.sh & wait"
"start": "bash ./start.sh",
"dev": "yarn build && yarn start & nodemon -e mdx,md --exec \"bash\" ./nodemon.sh & wait"
},
"devDependencies": {
"nodemon": "^3.0.1"
12 changes: 6 additions & 6 deletions docs/snippets/get_started/installation.mdx
@@ -123,12 +123,12 @@ import { OpenAI } from "langchain/llms/openai";

This applies to all imports from the following 6 modules, which have been split into submodules for each integration. The combined modules are deprecated, do not work outside of Node.js, and will be removed in a future version.

-- If you were using `langchain/llms`, see [LLMs](../modules/models/llms/integrations) for updated import paths.
-- If you were using `langchain/chat_models`, see [Chat Models](../modules/models/chat/integrations) for updated import paths.
-- If you were using `langchain/embeddings`, see [Embeddings](../modules/models/embeddings/integrations) for updated import paths.
-- If you were using `langchain/vectorstores`, see [Vector Stores](../modules/indexes/vector_stores/integrations/) for updated import paths.
-- If you were using `langchain/document_loaders`, see [Document Loaders](../modules/indexes/document_loaders/examples/) for updated import paths.
-- If you were using `langchain/retrievers`, see [Retrievers](../modules/indexes/retrievers/) for updated import paths.
+- If you were using `langchain/llms`, see [LLMs](/docs/modules/model_io/models/llms) for updated import paths.
+- If you were using `langchain/chat_models`, see [Chat Models](/docs/modules/model_io/models/chat) for updated import paths.
+- If you were using `langchain/embeddings`, see [Embeddings](/docs/modules/data_connection/text_embedding) for updated import paths.
+- If you were using `langchain/vectorstores`, see [Vector Stores](/docs/modules/data_connection/vectorstores) for updated import paths.
+- If you were using `langchain/document_loaders`, see [Document Loaders](/docs/modules/data_connection/document_loaders) for updated import paths.
+- If you were using `langchain/retrievers`, see [Retrievers](/docs/modules/data_connection/retrievers) for updated import paths.

Other modules are not affected by this change, and you can continue to import them from the same path.
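For example, the OpenAI integrations move from the combined entrypoints to per-integration paths; a sketch of the before-and-after shape:

```typescript
// Deprecated combined entrypoint (Node.js-only, removed in a later version):
// import { OpenAI } from "langchain/llms";

// Integration-specific entrypoints:
import { OpenAI } from "langchain/llms/openai";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
```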

14 changes: 4 additions & 10 deletions docs/snippets/get_started/introduction.mdx
@@ -10,9 +10,9 @@ Off-the-shelf chains make it easy to get started. For more complex applications…

## Get started

-[Here’s](/docs/get_started/installation.html) how to install LangChain, set up your environment, and start building.
+[Here’s](/docs/get_started/installation) how to install LangChain, set up your environment, and start building.

-We recommend following our [Quickstart](/docs/get_started/quickstart.html) guide to familiarize yourself with the framework by building your first LangChain application.
+We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.

_**Note**: These docs are for the LangChain [JS/TS package](https://github.com/hwchase17/langchainjs). For documentation on [the Python version](https://github.com/hwchase17/langchain), [head here](https://python.langchain.com/docs)._

@@ -38,17 +38,11 @@ Log and stream intermediate steps of any chain
Walkthroughs and best-practices for common end-to-end use cases, like:
- [Chatbots](/docs/use_cases/chatbots/)
- [Answering questions using sources](/docs/use_cases/question_answering/)
-- [Analyzing structured data](/docs/use_cases/tabular.html)
+- [Analyzing structured data](/docs/use_cases/tabular)
- and much more...

### [Guides](/docs/guides/)
Learn best practices for developing with LangChain.

-### [Ecosystem](/docs/ecosystem/)
-LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).
-
-### [Additional resources](/docs/additional_resources/)
-Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
+Our community is full of prolific developers, creative builders, and fantastic teachers. Check out the [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).

<h3><span style={{color:"#2e8555"}}> Support </span></h3>

2 changes: 1 addition & 1 deletion docs/snippets/modules/agents/tools/get_started.mdx
@@ -1,4 +1,4 @@
-Specifically, the interface of a tool has a single text input and a single text output. It includes a name and description that communicate to the [Model](../../models/) what the tool does and when to use it.
+Specifically, the interface of a tool has a single text input and a single text output. It includes a name and description that communicate to the model what the tool does and when to use it.

```typescript
interface Tool {
2 changes: 1 addition & 1 deletion docs/snippets/modules/chains/popular/chat_vector_db.mdx
@@ -53,7 +53,7 @@ import ConvoQAStreamingExample from "@examples/chains/conversational_qa_streamin…
## Externally-Managed Memory

For this chain, if you'd like to format the chat history in a custom way (or pass in chat messages directly for convenience), you can also pass the chat history in explicitly by omitting the `memory` option and supplying
-a `chat_history` string or array of [HumanMessages](../../schema/chat-messages) and [AIMessages](../../schema/chat-messages) directly into the `chain.call` method:
+a `chat_history` string or array of [HumanMessages](/docs/api/schema/classes/HumanMessage) and [AIMessages](/docs/api/schema/classes/AIMessage) directly into the `chain.call` method:

import ConvoQAExternalMemoryExample from "@examples/chains/conversational_qa_external_memory.ts";
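A minimal sketch of the externally-managed form; the chain setup, question, and messages are illustrative assumptions:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { HumanMessage, AIMessage } from "langchain/schema";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain is a framework for building applications with LLMs."],
  [{ id: 1 }],
  new OpenAIEmbeddings()
);

// No `memory` option: chat history is supplied explicitly on each call.
const chain = ConversationalRetrievalQAChain.fromLLM(
  new ChatOpenAI({ temperature: 0 }),
  vectorStore.asRetriever()
);
const res = await chain.call({
  question: "Can you repeat that in one sentence?",
  chat_history: [
    new HumanMessage("What is LangChain?"),
    new AIMessage("LangChain is a framework for building LLM applications."),
  ],
});
```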

@@ -82,7 +82,7 @@ interface VectorStore {
}
```

-You can create a vector store from a list of [Documents](../../schema/document), or from a list of texts and their corresponding metadata. You can also create a vector store from an existing index, the signature of this method depends on the vector store you're using, check the documentation of the vector store you're interested in.
+You can create a vector store from a list of [Documents](/docs/api/document/classes/Document), or from a list of texts and their corresponding metadata. You can also create a vector store from an existing index, the signature of this method depends on the vector store you're using, check the documentation of the vector store you're interested in.
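A hedged sketch of building a store from texts plus metadata, here with the in-memory vector store; the texts and embeddings choice are illustrative:

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";

// Build a store from raw texts and matching metadata, then query it.
const store = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);
const results = await store.similaritySearch("hello world", 1);
```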

```typescript
abstract class BaseVectorStore implements VectorStore {
2 changes: 1 addition & 1 deletion docs/snippets/modules/memory/get_started.mdx
@@ -110,7 +110,7 @@ To implement your own memory class you have two options:

### Subclassing `BaseChatMemory`

-This is the easiest way to implement your own memory class. You can subclass `BaseChatMemory`, which takes care of `saveContext` by saving inputs and outputs as [Chat Messages](../schema/chat-messages.md), and implement only the `loadMemoryVariables` method. This method is responsible for returning the memory variables that are relevant for the current input values.
+This is the easiest way to implement your own memory class. You can subclass `BaseChatMemory`, which takes care of `saveContext` by saving inputs and outputs as [Chat Messages](/docs/api/schema/classes/BaseMessage), and implement only the `loadMemoryVariables` method. This method is responsible for returning the memory variables that are relevant for the current input values.
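A minimal sketch of such a subclass; treat the `getBufferString` helper and the `memoryKeys` getter as assumptions that may vary by version:

```typescript
import { BaseChatMemory, getBufferString } from "langchain/memory";

// BaseChatMemory handles saveContext; we only decide how stored messages
// are surfaced back to the chain.
class SimpleChatMemory extends BaseChatMemory {
  get memoryKeys() {
    return ["history"];
  }

  async loadMemoryVariables(_values: Record<string, unknown>) {
    const messages = await this.chatHistory.getMessages();
    return { history: getBufferString(messages) };
  }
}
```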

```typescript
abstract class BaseChatMemory extends BaseMemory {
16 changes: 4 additions & 12 deletions docs/snippets/modules/model_io/models/llms/how_to/llm_caching.mdx
@@ -18,35 +18,27 @@ The default cache is stored in-memory. This means that if you restart your application…
// The first time, it is not yet in cache, so it should take longer
const res = await model.predict("Tell me a joke");
console.log(res);
-```
-
-<CodeOutputBlock lang="typescript">
-
-```
+/*
CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms
Wall time: 4.83 s
"\n\nWhy did the chicken cross the road?\n\nTo get to the other side."
+*/
```
-
-</CodeOutputBlock>


```typescript
// The second time it is, so it goes faster
const res2 = await model.predict("Tell me a joke");
console.log(res2);
-```
-
-<CodeOutputBlock lang="typescript">
-
-```
+/*
CPU times: user 238 µs, sys: 143 µs, total: 381 µs
Wall time: 1.76 ms
"\n\nWhy did the chicken cross the road?\n\nTo get to the other side."
+*/
```
-
-</CodeOutputBlock>
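For reference, a minimal sketch of opting a model into the cache described above:

```typescript
import { OpenAI } from "langchain/llms/openai";

// Caching is opt-in per model instance; identical prompts then hit the
// in-memory cache instead of the API.
const model = new OpenAI({ cache: true });
```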
@@ -6,20 +6,3 @@ import StreamingExample from "@examples/models/llm/llm_streaming.ts";
<CodeBlock language="typescript">{StreamingExample}</CodeBlock>

We still have access to the end `LLMResult` if using `generate`. However, `token_usage` is not currently supported for streaming.


-```python
-llm.generate(["Tell me a joke."])
-```
-
-<CodeOutputBlock lang="python">
-
-```
-Q: What did the fish say when it hit the wall?
-A: Dam!
-LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when it hit the wall?\nA: Dam!', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {}, 'model_name': 'text-davinci-003'})
-```
-
-</CodeOutputBlock>
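For parity with the removed Python snippet, a hedged TypeScript sketch of streaming while still calling `generate`; the handler shape is the package's standard callback object, and details may vary by version:

```typescript
import { OpenAI } from "langchain/llms/openai";

// Stream tokens via a callback while still receiving the final LLMResult.
const model = new OpenAI({
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        process.stdout.write(token);
      },
    },
  ],
});
const result = await model.generate(["Tell me a joke."]);
// Per the note above, token usage is not reported when streaming.
console.log(result.generations[0][0].text);
```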