Update streaming examples to use request callbacks (langchain-ai#1033)
* Update streaming examples to use request callbacks

* Add clearer docs for the various ways of using callbacks

* Improve docs for tools

* Fix broken links
nfcampos authored Apr 28, 2023
1 parent 01f3c9f commit 4e7ea75
Showing 19 changed files with 190 additions and 154 deletions.
10 changes: 0 additions & 10 deletions docs/docs/modules/agents/toolkits/examples/index.mdx

This file was deleted.

File renamed without changes.
File renamed without changes.
40 changes: 30 additions & 10 deletions docs/docs/modules/agents/tools/integrations/index.mdx
```diff
@@ -1,6 +1,7 @@
 ---
 sidebar_label: Integrations
 hide_table_of_contents: true
+sidebar_position: 1
 ---
 
 import DocCardList from "@theme/DocCardList";
@@ -9,13 +10,32 @@ import DocCardList from "@theme/DocCardList";
 
 LangChain provides the following tools you can use out of the box:
 
-- `AWSLambda` - A wrapper around the AWS Lambda API, invoked via the Amazon Web Services Node.js SDK. Useful for invoking serverless functions with any behavior which you need to provide to an Agent.
-- `BingSerpAPI` - A wrapper around the Bing Search API. Useful for when you need to answer questions about current events. Input should be a search query.
-- `Calculator` - Useful for getting the result of a math expression. The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.
-- `IFTTTWebHook` - A wrapper around the IFTTT Webhook API. Useful for triggering IFTTT actions.
-- `JsonListKeys` and `JsonGetValue` - Useful for extracting data from JSON objects. These tools can be used collectively in a `JsonToolkit`.
-- `RequestsGet` and `RequestsPost` - Useful for making HTTP requests.
-- `SerpAPI` - A wrapper around the SerpAPI API. Useful for when you need to answer questions about current events. Input should be a search query.
-- `QuerySqlTool`, `InfoSqlTool`, `ListTablesSqlTool`, and `SqlCheckerTool` - Useful for interacting with SQL databases. Can be used together in a `SqlToolkit`.
-- `VectorStoreQATool` - Useful for retrieving relevant text data from a vector store.
-- `ZapierNLARunAction` - A wrapper around the Zapier NLP API. Useful for triggering Zapier actions with a natural language input. Best when used in a `ZapierToolkit`.
+- [`AWSLambda`][AWSLambda] - A wrapper around the AWS Lambda API, invoked via the Amazon Web Services Node.js SDK. Useful for invoking serverless functions with any behavior which you need to provide to an Agent.
+- [`BingSerpAPI`][BingSerpAPI] - A wrapper around the Bing Search API. Useful for when you need to answer questions about current events. Input should be a search query.
+- [`Calculator`][Calculator] - Useful for getting the result of a math expression. The input to this tool should be a valid mathematical expression that could be executed by a simple calculator.
+- [`IFTTTWebHook`][IFTTTWebHook] - A wrapper around the IFTTT Webhook API. Useful for triggering IFTTT actions.
+- [`JsonListKeysTool`][JsonListKeysTool] and [`JsonGetValueTool`][JsonGetValueTool] - Useful for extracting data from JSON objects. These tools can be used collectively in a [`JsonToolkit`][JsonToolkit].
+- [`RequestsGetTool`][RequestsGetTool] and [`RequestsPostTool`][RequestsPostTool] - Useful for making HTTP requests.
+- [`SerpAPI`][SerpAPI] - A wrapper around the SerpAPI API. Useful for when you need to answer questions about current events. Input should be a search query.
+- [`QuerySqlTool`][QuerySqlTool], [`InfoSqlTool`][InfoSqlTool], [`ListTablesSqlTool`][ListTablesSqlTool], and [`QueryCheckerTool`][QueryCheckerTool] - Useful for interacting with SQL databases. Can be used together in a [`SqlToolkit`][SqlToolkit].
+- [`VectorStoreQATool`][VectorStoreQATool] - Useful for retrieving relevant text data from a vector store.
+- [`ZapierNLARunAction`][ZapierNLARunAction] - A wrapper around the Zapier NLP API. Useful for triggering Zapier actions with a natural language input. Best when used in a [`ZapierToolkit`][ZapierToolkit].
+
+[AWSLambda]: /docs/api/tools_aws_lambda/classes/AWSLambda
+[BingSerpAPI]: /docs/api/tools/classes/BingSerpAPI
+[Calculator]: /docs/api/tools_calculator/classes/Calculator
+[IFTTTWebHook]: /docs/api/tools/classes/IFTTTWebHook
+[JsonListKeysTool]: /docs/api/tools/classes/JsonListKeysTool
+[JsonGetValueTool]: /docs/api/tools/classes/JsonGetValueTool
+[JsonToolkit]: ../../toolkits/json
+[RequestsGetTool]: /docs/api/tools/classes/RequestsGetTool
+[RequestsPostTool]: /docs/api/tools/classes/RequestsPostTool
+[SerpAPI]: /docs/api/tools/classes/SerpAPI
+[QuerySqlTool]: /docs/api/tools/classes/QuerySqlTool
+[InfoSqlTool]: /docs/api/tools/classes/InfoSqlTool
+[ListTablesSqlTool]: /docs/api/tools/classes/ListTablesSqlTool
+[QueryCheckerTool]: /docs/api/tools/classes/QueryCheckerTool
+[SqlToolkit]: ../../toolkits/sql
+[VectorStoreQATool]: /docs/api/tools/classes/VectorStoreQATool
+[ZapierNLARunAction]: /docs/api/tools/classes/ZapierNLARunAction
+[ZapierToolkit]: ../zapier_agent
```
````diff
@@ -1,3 +1,5 @@
+import CodeBlock from "@theme/CodeBlock";
+
 # Agent with Zapier NLA Integration
 
 Full docs here: https://nla.zapier.com/api/v1/dynamic/docs
@@ -18,35 +20,6 @@ This quick start will focus on the server-side use case for brevity. Review full
 
 The example below demonstrates how to use the Zapier integration as an Agent:
 
-```typescript
-import { OpenAI } from "langchain/llms/openai";
-import {
-  initializeAgentExecutorWithOptions,
-  ZapierToolKit,
-} from "langchain/agents";
-import { ZapierNLAWrapper } from "langchain/tools";
-
-export const run = async () => {
-  const model = new OpenAI({ temperature: 0 });
-  const zapier = new ZapierNLAWrapper();
-  const toolkit = await ZapierToolKit.fromZapierNLAWrapper(zapier);
-
-  const executor = await initializeAgentExecutorWithOptions(
-    toolkit.tools,
-    model,
-    {
-      agentType: "zero-shot-react-description",
-      verbose: true,
-    }
-  );
-  console.log("Loaded agent.");
-
-  const input = `Summarize the last email I received regarding Silicon Valley Bank. Send the summary to the #test-zapier Slack channel.`;
-
-  console.log(`Executing with input "${input}"...`);
-
-  const result = await executor.call({ input });
-
-  console.log(`Got output ${result.output}`);
-};
-```
+import Example from "@examples/agents/zapier_mrkl.ts";
+
+<CodeBlock language="typescript">{Example}</CodeBlock>
````
25 changes: 22 additions & 3 deletions docs/docs/production/callbacks/index.mdx
```diff
@@ -14,13 +14,32 @@ import DocCardList from "@theme/DocCardList";
 
 ## How to use callbacks
 
-The `callbacks` argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) in two different places:
+The `callbacks` argument is available on most objects throughout the API ([Chains](../../modules/chains/), [Models](../../modules/models/), [Tools](../../modules/agents/tools/), [Agents](../../modules/agents/agents/), etc.) in two different places:
 
-- **Constructor callbacks**: defined in the constructor, eg. `new LLMChain({ callbacks: [handler] })`, which will be used for all calls made on that object, and will be scoped to that object only, eg. if you pass a handler to the `LLMChain` constructor, it will not be used by the Model attached to that chain.
-- **Request callbacks**: defined in the `call()`/`run()`/`apply()` methods used for issuing a request, eg. `chain.call({ input: '...' }, [handler])`, which will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the `call()` method).
+### Constructor callbacks
+
+Defined in the constructor, eg. `new LLMChain({ callbacks: [handler] })`, which will be used for all calls made on that object, and will be scoped to that object only, eg. if you pass a handler to the `LLMChain` constructor, it will not be used by the Model attached to that chain.
+
+import ConstructorExample from "@examples/callbacks/docs_constructor_callbacks.ts";
+
+<CodeBlock language="typescript">{ConstructorExample}</CodeBlock>
+
+### Request callbacks
+
+Defined in the `call()`/`run()`/`apply()` methods used for issuing a request, eg. `chain.call({ input: '...' }, [handler])`, which will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the `call()` method).
+
+import RequestExample from "@examples/callbacks/docs_request_callbacks.ts";
+
+<CodeBlock language="typescript">{RequestExample}</CodeBlock>
+
+### Verbose mode
+
+The `verbose` argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, eg. `new LLMChain({ verbose: true })`, and it is equivalent to passing a `ConsoleCallbackHandler` to the `callbacks` argument of that object and all child objects. This is useful for debugging, as it will log all events to the console.
+
+import VerboseExample from "@examples/callbacks/docs_verbose.ts";
+
+<CodeBlock language="typescript">{VerboseExample}</CodeBlock>
 
 ### When do you want to use each of these?
 
 - Constructor callbacks are most useful for use cases such as logging, monitoring, etc., which are _not specific to a single request_, but rather to the entire chain. For example, if you want to log all the requests made to an LLMChain, you would pass a handler to the constructor.
```
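The constructor-versus-request scoping rule can be sketched without LangChain installed: a callback handler is just an object whose optional event methods (such as `handleLLMNewToken`, the method name LangChain fires for streamed tokens) are invoked as events occur, and a request merges the handlers from the constructor with any handlers passed for that call. Everything else here — `fakeStreamingCall` and both handler variables — is a hypothetical stand-in for illustration, not LangChain API.

```typescript
// A handler is an object with optional event methods; only the method
// name `handleLLMNewToken` mirrors the LangChain callback interface.
type CallbackHandler = {
  handleLLMNewToken?: (token: string) => void;
};

// Hypothetical stand-in for a streaming model call: for one request it
// combines constructor-scoped and request-scoped handlers, then emits
// every token to each of them before returning the joined text.
function fakeStreamingCall(
  constructorHandlers: CallbackHandler[],
  requestHandlers: CallbackHandler[],
  tokens: string[]
): string {
  const handlers = [...constructorHandlers, ...requestHandlers];
  for (const token of tokens) {
    for (const h of handlers) h.handleLLMNewToken?.(token);
  }
  return tokens.join("");
}

// Constructor handler: attached once, would see every future call too.
const seenByConstructor: string[] = [];
const constructorHandler: CallbackHandler = {
  handleLLMNewToken: (t) => {
    seenByConstructor.push(t);
  },
};

// Request handler: passed per call, sees only this call.
const seenByRequest: string[] = [];
const requestHandler: CallbackHandler = {
  handleLLMNewToken: (t) => {
    seenByRequest.push(t);
  },
};

const out = fakeStreamingCall([constructorHandler], [requestHandler], [
  "Why", " don't", " scientists", " trust", " atoms", "?",
]);
console.log(out); // "Why don't scientists trust atoms?"
```

For this single call both handlers observe the same token stream; the difference only shows up across calls, where the constructor handler keeps firing while a request handler is gone after its request finishes.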
2 changes: 1 addition & 1 deletion docs/docs/use_cases/api.mdx
```diff
@@ -26,4 +26,4 @@ Agents are more complex, and involve multiple queries to the LLM to understand w
 The downside of agents are that you have less control. The upside is that they are more powerful,
 which allows you to use them on larger and more complex schemas.
 
-- [OpenAPI Agent](../modules/agents/toolkits/examples/openapi.md)
+- [OpenAPI Agent](../modules/agents/toolkits/openapi.md)
```
2 changes: 1 addition & 1 deletion docs/docs/use_cases/question_answering.mdx
```diff
@@ -37,4 +37,4 @@ For an overview of these chains (and more) see the below documentation.
 If you want to be able to answer more complex, multi-hop questions you should look into combining your indexes with an agent.
 For an example of how to do that, please see the below.
 
-- [Vectorstore Agent](../modules/agents/toolkits/examples/vectorstore.md)
+- [Vectorstore Agent](../modules/agents/toolkits/vectorstore.md)
```
2 changes: 1 addition & 1 deletion docs/docs/use_cases/tabular.mdx
```diff
@@ -26,4 +26,4 @@ Agents are more complex, and involve multiple queries to the LLM to understand w
 The downside of agents are that you have less control. The upside is that they are more powerful,
 which allows you to use them on larger databases and more complex schemas.
 
-- [SQL Agent](../modules/agents/toolkits/examples/sql.mdx)
+- [SQL Agent](../modules/agents/toolkits/sql.mdx)
```
36 changes: 17 additions & 19 deletions examples/src/agents/zapier_mrkl.ts
```diff
@@ -1,30 +1,28 @@
 import { OpenAI } from "langchain/llms/openai";
+import { ZapierNLAWrapper } from "langchain/tools";
 import {
   initializeAgentExecutorWithOptions,
   ZapierToolKit,
 } from "langchain/agents";
-import { ZapierNLAWrapper } from "langchain/tools";
 
-export const run = async () => {
-  const model = new OpenAI({ temperature: 0 });
-  const zapier = new ZapierNLAWrapper();
-  const toolkit = await ZapierToolKit.fromZapierNLAWrapper(zapier);
+const model = new OpenAI({ temperature: 0 });
+const zapier = new ZapierNLAWrapper();
+const toolkit = await ZapierToolKit.fromZapierNLAWrapper(zapier);
 
-  const executor = await initializeAgentExecutorWithOptions(
-    toolkit.tools,
-    model,
-    {
-      agentType: "zero-shot-react-description",
-      verbose: true,
-    }
-  );
-  console.log("Loaded agent.");
+const executor = await initializeAgentExecutorWithOptions(
+  toolkit.tools,
+  model,
+  {
+    agentType: "zero-shot-react-description",
+    verbose: true,
+  }
+);
+console.log("Loaded agent.");
 
-  const input = `Summarize the last email I received regarding Silicon Valley Bank. Send the summary to the #test-zapier Slack channel.`;
+const input = `Summarize the last email I received regarding Silicon Valley Bank. Send the summary to the #test-zapier Slack channel.`;
 
-  console.log(`Executing with input "${input}"...`);
+console.log(`Executing with input "${input}"...`);
 
-  const result = await executor.call({ input });
+const result = await executor.call({ input });
 
-  console.log(`Got output ${result.output}`);
-};
+console.log(`Got output ${result.output}`);
```
8 changes: 8 additions & 0 deletions examples/src/callbacks/docs_constructor_callbacks.ts
```diff
@@ -0,0 +1,8 @@
+import { ConsoleCallbackHandler } from "langchain/callbacks";
+import { OpenAI } from "langchain/llms/openai";
+
+const llm = new OpenAI({
+  temperature: 0,
+  // This handler will be used for all calls made with this LLM.
+  callbacks: [new ConsoleCallbackHandler()],
+});
```
11 changes: 11 additions & 0 deletions examples/src/callbacks/docs_request_callbacks.ts
```diff
@@ -0,0 +1,11 @@
+import { ConsoleCallbackHandler } from "langchain/callbacks";
+import { OpenAI } from "langchain/llms/openai";
+
+const llm = new OpenAI({
+  temperature: 0,
+});
+
+// This handler will be used only for this call.
+const response = await llm.call("1 + 1 =", undefined, [
+  new ConsoleCallbackHandler(),
+]);
```
10 changes: 10 additions & 0 deletions examples/src/callbacks/docs_verbose.ts
```diff
@@ -0,0 +1,10 @@
+import { PromptTemplate } from "langchain/prompts";
+import { LLMChain } from "langchain/chains";
+import { OpenAI } from "langchain/llms/openai";
+
+const chain = new LLMChain({
+  llm: new OpenAI({ temperature: 0 }),
+  prompt: PromptTemplate.fromTemplate("Hello, world!"),
+  // This will enable logging of all Chain *and* LLM events to the console.
+  verbose: true,
+});
```
69 changes: 35 additions & 34 deletions examples/src/models/chat/chat_streaming.ts
```diff
@@ -1,39 +1,40 @@
 import { ChatOpenAI } from "langchain/chat_models/openai";
 import { HumanChatMessage } from "langchain/schema";
 
-export const run = async () => {
-  const chat = new ChatOpenAI({
-    maxTokens: 25,
-    streaming: true,
-    callbacks: [
-      {
-        handleLLMNewToken(token: string) {
-          console.log({ token });
-        },
-      },
-    ],
-  });
+const chat = new ChatOpenAI({
+  maxTokens: 25,
+  streaming: true,
+});
 
-  const response = await chat.call([new HumanChatMessage("Tell me a joke.")]);
+const response = await chat.call(
+  [new HumanChatMessage("Tell me a joke.")],
+  undefined,
+  [
+    {
+      handleLLMNewToken(token: string) {
+        console.log({ token });
+      },
+    },
+  ]
+);
 
-  console.log(response);
-  // { token: '' }
-  // { token: '\n\n' }
-  // { token: 'Why' }
-  // { token: ' don' }
-  // { token: "'t" }
-  // { token: ' scientists' }
-  // { token: ' trust' }
-  // { token: ' atoms' }
-  // { token: '?\n\n' }
-  // { token: 'Because' }
-  // { token: ' they' }
-  // { token: ' make' }
-  // { token: ' up' }
-  // { token: ' everything' }
-  // { token: '.' }
-  // { token: '' }
-  // AIChatMessage {
-  //   text: "\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything."
-  // }
-};
+console.log(response);
+// { token: '' }
+// { token: '\n\n' }
+// { token: 'Why' }
+// { token: ' don' }
+// { token: "'t" }
+// { token: ' scientists' }
+// { token: ' trust' }
+// { token: ' atoms' }
+// { token: '?\n\n' }
+// { token: 'Because' }
+// { token: ' they' }
+// { token: ' make' }
+// { token: ' up' }
+// { token: ' everything' }
+// { token: '.' }
+// { token: '' }
+// AIChatMessage {
+//   text: "\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything."
+// }
```