Commit: Add basic ai/core docs (vercel#1240)

Showing 16 changed files with 1,091 additions and 13 deletions.

@@ -0,0 +1,17 @@

{
  "index": {
    "title": "Overview",
    "theme": {
      "breadcrumb": false
    }
  },
  "prompt": "Prompts",
  "settings": "Settings",
  "tools": "Tools and Tool Calling",
  "openai": "OpenAI Provider",
  "mistral": "Mistral Provider",
  "generate-text": "generateText API",
  "stream-text": "streamText API",
  "generate-object": "generateObject API",
  "stream-object": "streamObject API"
}

@@ -0,0 +1,101 @@

---
title: generateObject API
---

import { Callout } from 'nextra-theme-docs';

# experimental_generateObject

Generate a typed, structured object for a given prompt and [Zod](https://zod.dev/) schema using a language model.

You can use `generateObject` to force the language model to return structured data, e.g. for information extraction,
synthetic data generation, or classification tasks.

```ts
import { experimental_generateObject } from 'ai';
import { z } from 'zod';

const { object } = await experimental_generateObject({
  model,
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(
        z.object({
          name: z.string(),
          amount: z.string(),
        }),
      ),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});
```

<Callout type="info">
  `experimental_generateObject` does not stream the output. If you want to
  stream the output, use
  [`experimental_streamObject`](/docs/ai-core/stream-object).
</Callout>

## Parameters

The parameters are passed into `experimental_generateObject` as a single options object.

- **model** - The language model to use.
- **schema** - A [Zod](https://zod.dev/) schema that describes the expected output structure.
- **mode** - The mode to use for object generation. Not all models support all modes. Defaults to 'auto'.
- **system** - A system message that will be part of the prompt.
- **prompt** - A simple text prompt. You can use either `prompt` or `messages`, but not both.
- **messages** - A list of messages. You can use either `prompt` or `messages`, but not both.
- **maxTokens** - Maximum number of tokens to generate.
- **temperature** - Temperature setting.
  This is a number between 0 (almost no randomness) and 1 (very random).
  It is recommended to set either `temperature` or `topP`, but not both.
- **topP** - Nucleus sampling. This is a number between 0 and 1.
  E.g. 0.1 means that only tokens with the top 10% probability mass are considered.
  It is recommended to set either `temperature` or `topP`, but not both.
- **presencePenalty** - Presence penalty setting.
  It affects the likelihood of the model repeating information that is already in the prompt.
  The presence penalty is a number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).
  0 means no penalty.
- **frequencyPenalty** - Frequency penalty setting.
  It affects the likelihood of the model repeatedly using the same words or phrases.
  The frequency penalty is a number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).
  0 means no penalty.
- **seed** - The seed (integer) to use for random sampling.
  If set and supported by the model, calls will generate deterministic results.
- **maxRetries** - Maximum number of retries. Set to 0 to disable retries. Default: 2.
- **abortSignal** - An optional abort signal that can be used to cancel the call.
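
As a sketch of how several of these options combine in a single call (a configured `model` instance and the Zod import are assumed):

```ts
const controller = new AbortController();

const { object } = await experimental_generateObject({
  model,
  schema: z.object({
    sentiment: z.enum(['positive', 'neutral', 'negative']),
  }),
  system: 'You classify the sentiment of user feedback.',
  prompt: 'The checkout flow was confusing and slow.',
  temperature: 0, // low randomness suits classification tasks
  maxRetries: 2, // the default; set to 0 to disable retries
  abortSignal: controller.signal, // call controller.abort() to cancel the request
});
```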

## Return Type

`generateObject` returns a result object with several properties:

- **object**: `T` - The generated object. Typed based on the schema parameter. The object is validated using Zod.
- **finishReason**: `stop` | `length` | `content-filter` | `tool-calls` | `error` | `other` - The reason why the generation finished.
- **usage**: `TokenUsage` - The token usage of the generated text. The object contains `promptTokens: number`, `completionTokens: number`, and `totalTokens: number`.
- **warnings**: `Array<Warning> | undefined` - Warnings from the model provider (e.g., unsupported settings).
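
For example, a caller might inspect the metadata alongside the generated object (a minimal sketch, `model` assumed):

```ts
const { object, finishReason, usage, warnings } =
  await experimental_generateObject({
    model,
    schema: z.object({ answer: z.string() }),
    prompt: 'What is the capital of France?',
  });

if (warnings) console.warn(warnings); // e.g. unsupported settings
console.log(finishReason); // 'stop' when generation completed normally
console.log(usage.totalTokens); // promptTokens + completionTokens
console.log(object.answer); // typed as string via the schema
```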

## Examples

### Basic call

```ts
const { object } = await experimental_generateObject({
  model,
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(
        z.object({
          name: z.string(),
          amount: z.string(),
        }),
      ),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});
```

@@ -0,0 +1,129 @@

---
title: generateText API
---

import { Callout } from 'nextra-theme-docs';

# experimental_generateText

Generate text and call tools for a given prompt using a language model.

`experimental_generateText` is ideal for non-interactive use cases such as automation tasks where you need to write text (e.g. drafting emails or summarizing web pages) and for agents that use tools.

```ts
import { experimental_generateText } from 'ai';

const { text } = await experimental_generateText({
  model,
  prompt: 'Invent a new holiday and describe its traditions.',
});
```

<Callout type="info">
  `experimental_generateText` does not stream the output. If you want to stream
  the output, use [`experimental_streamText`](/docs/ai-core/stream-text).
</Callout>

## Parameters

The parameters are passed into `experimental_generateText` as a single options object.

- **model** - The language model to use.
- **tools** - The tools that the model can call. The model needs to support calling tools.
- **system** - A system message that will be part of the prompt.
- **prompt** - A simple text prompt. You can use either `prompt` or `messages`, but not both.
- **messages** - A list of messages. You can use either `prompt` or `messages`, but not both.
- **maxTokens** - Maximum number of tokens to generate.
- **temperature** - Temperature setting.
  This is a number between 0 (almost no randomness) and 1 (very random).
  It is recommended to set either `temperature` or `topP`, but not both.
- **topP** - Nucleus sampling. This is a number between 0 and 1.
  E.g. 0.1 means that only tokens with the top 10% probability mass are considered.
  It is recommended to set either `temperature` or `topP`, but not both.
- **presencePenalty** - Presence penalty setting.
  It affects the likelihood of the model repeating information that is already in the prompt.
  The presence penalty is a number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).
  0 means no penalty.
- **frequencyPenalty** - Frequency penalty setting.
  It affects the likelihood of the model repeatedly using the same words or phrases.
  The frequency penalty is a number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).
  0 means no penalty.
- **seed** - The seed (integer) to use for random sampling.
  If set and supported by the model, calls will generate deterministic results.
- **maxRetries** - Maximum number of retries. Set to 0 to disable retries. Default: 2.
- **abortSignal** - An optional abort signal that can be used to cancel the call.
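
As a sketch of using `messages` together with a system message instead of `prompt` (a configured `model` instance is assumed):

```ts
const { text } = await experimental_generateText({
  model,
  system: 'You are a concise technical writer.',
  messages: [
    {
      role: 'user',
      content: 'Summarize the benefits of streaming UIs in two sentences.',
    },
  ],
  maxTokens: 200,
  temperature: 0.7,
});
```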

## Return Type

`generateText` returns a result object with several properties:

- **text**: `string` - The generated text.
- **toolCalls**: `Array<ToolCall>` - The tool calls that were made during the generation. Each tool call has a `toolCallId`, a `toolName`, and a typed `args` object.
- **toolResults**: `Array<ToolResult>` - The results of the tool calls. Each tool result has a `toolCallId`, a `toolName`, a typed `args` object, and a typed `result` object.
- **finishReason**: `stop` | `length` | `content-filter` | `tool-calls` | `error` | `other` - The reason why the generation finished.
- **usage**: `TokenUsage` - The token usage of the generated text. The object contains `promptTokens: number`, `completionTokens: number`, and `totalTokens: number`.
- **warnings**: `Array<Warning> | undefined` - Warnings from the model provider (e.g., unsupported settings).
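
For example, a caller might check the result metadata after a call (a minimal sketch, `model` assumed):

```ts
const result = await experimental_generateText({
  model,
  prompt: 'Write a haiku about type safety.',
});

console.log(result.text);
console.log(result.finishReason); // 'stop' when the model completed normally
console.log(result.usage.totalTokens); // promptTokens + completionTokens
```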

## Examples

### Basic call

```ts
const { text, usage, finishReason } = await experimental_generateText({
  model,
  prompt: 'Invent a new holiday and describe its traditions.',
});
```

### Tool Usage

```ts
import { experimental_generateText } from 'ai';
import { openai } from 'ai/openai';
import { z } from 'zod';

const result = await experimental_generateText({
  model: openai.chat('gpt-3.5-turbo'),
  maxTokens: 512,
  tools: {
    weather: {
      description: 'Get the weather in a location',
      parameters: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      // tools with an execute method are run automatically:
      execute: async ({ location }: { location: string }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    },
    // tools without an execute method only produce tool calls:
    cityAttractions: {
      parameters: z.object({ city: z.string() }),
    },
  },
  prompt:
    'What is the weather in San Francisco and what attractions should I visit?',
});

// typed tool calls:
for (const toolCall of result.toolCalls) {
  switch (toolCall.toolName) {
    case 'cityAttractions': {
      toolCall.args.city; // string
      break;
    }

    case 'weather': {
      toolCall.args.location; // string
      break;
    }
  }
}

// typed tool results for tools with execute method:
for (const toolResult of result.toolResults) {
  switch (toolResult.toolName) {
    case 'weather': {
      toolResult.args.location; // string
      toolResult.result.location; // string
      toolResult.result.temperature; // number
      break;
    }
  }
}
```
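
Because `cityAttractions` has no `execute` method, it only appears in `result.toolCalls`; only the `weather` tool, which defines `execute`, contributes an entry to `result.toolResults`.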

@@ -0,0 +1,48 @@

---
title: AI Core (experimental)
---

import { Callout } from 'nextra-theme-docs';

# AI Core (experimental)

<Callout>
  AI Core is an experimental API. The API is not yet stable and may change in
  the future without a major version bump.
</Callout>

The Vercel AI SDK offers a unified way of calling large language models (LLMs) that can be used with any [AI Core-compatible provider](/docs/ai-core#language-model-interface).
It provides the following AI functions:

- `generateText` [ [API](/docs/ai-core/generate-text) ] - Generate text and [call tools](/docs/ai-core/tools).
  This function is ideal for non-interactive use cases such as automation tasks where you need to write text (e.g. drafting emails or summarizing web pages) and for agents that use tools.
- `streamText` [ [API](/docs/ai-core/stream-text) ] - Stream text and call tools.
  You can use the `streamText` function for interactive use cases such as chatbots (with and without tool usage), and text and code diff streaming in UIs. You can also generate UI components with tools (see [Generative UI](/docs/concepts/ai-rsc)).
- `generateObject` [ [API](/docs/ai-core/generate-object) ] - Generate a typed, structured object that matches a [Zod](https://zod.dev/) schema.
  You can use this function to force the language model to return structured data, e.g. for information extraction, synthetic data generation, or classification tasks.
- `streamObject` [ [API](/docs/ai-core/stream-object) ] - Stream a structured object that matches a Zod schema.
  You can use this function to stream generated UIs in combination with React Server Components (see [Generative UI](/docs/concepts/ai-rsc)).

The AI functions share the same [prompt structure](/docs/ai-core/prompt) and the same [common settings](/docs/ai-core/settings).
The model is created using a language model provider, e.g. the [OpenAI provider](/docs/ai-core/openai).
Here is a simple example for `generateText`:

```ts
import { experimental_generateText } from 'ai';
import { openai } from 'ai/openai';

const { text } = await experimental_generateText({
  model: openai.chat('gpt-3.5-turbo'),
  prompt: 'Invent a new holiday and describe its traditions.',
});
```
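
For comparison, a minimal sketch of the streaming counterpart (assuming the same provider setup; `textStream` is consumed as an async iterable):

```ts
import { experimental_streamText } from 'ai';
import { openai } from 'ai/openai';

const result = await experimental_streamText({
  model: openai.chat('gpt-3.5-turbo'),
  prompt: 'Invent a new holiday and describe its traditions.',
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```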

## Language Model Interface

Providers need to implement the language model interface to be compatible with the AI SDK.
The AI SDK contains the following providers:

- [OpenAI Provider](/docs/ai-core/openai) (`ai/openai`)
- [Mistral Provider](/docs/ai-core/mistral) (`ai/mistral`)
 |

@@ -0,0 +1,48 @@

---
title: Mistral Provider
---

import { Callout } from 'nextra-theme-docs';

# Mistral Provider

The Mistral provider contains language model support for the Mistral chat API.
It creates language model objects that can be used with the `generateText`, `streamText`, `generateObject`, and `streamObject` AI functions.

## Provider Instance

You can import `Mistral` from `ai/mistral` and initialize a provider instance with various settings:

```ts
import { Mistral } from 'ai/mistral';

const mistral = new Mistral({
  baseUrl: '', // optional base URL, e.g. for proxies
  apiKey: '', // optional API key; defaults to the MISTRAL_API_KEY environment variable
});
```

The AI SDK also provides a shorthand `mistral` import with a Mistral provider instance that uses defaults:

```ts
import { mistral } from 'ai/mistral';
```

## Chat Models

You can create models that call the [Mistral chat API](https://docs.mistral.ai/api/#operation/createChatCompletion) using the `.chat()` factory method.
The first argument is the model id, e.g. `mistral-large-latest`.
Some Mistral chat models support tool calls.

```ts
const model = mistral.chat('mistral-large-latest');
```

Mistral chat models also support additional model settings that are not part of the [standard call settings](/docs/ai-core/settings).
You can pass them as an options argument:

```ts
const model = mistral.chat('mistral-large-latest', {
  safePrompt: true, // optional safety prompt injection
});
```
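
Putting it together, an end-to-end sketch (assuming `MISTRAL_API_KEY` is set in the environment):

```ts
import { experimental_generateText } from 'ai';
import { mistral } from 'ai/mistral';

const { text } = await experimental_generateText({
  model: mistral.chat('mistral-large-latest'),
  prompt: 'Explain the difference between TCP and UDP in one paragraph.',
});

console.log(text);
```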