There is already Anthropic and OpenAI provider support, and it allows configuring which model to use. I would like to add Ollama support next. It should default to a local model, but the ollamaBaseUrl should be configurable as a config parameter via the "config set ollamaBaseUrl [value]" command line. There is no need to support image generation or image analysis to start. The test model to use on Ollama is llama3-groq-tool-use; it is already downloaded and available via Ollama on the local development machine.
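Rough sketch of what I have in mind for the wiring (the function name and config shape below are placeholders, not the project's actual API):

import { createOllama } from 'ollama-ai-provider';

const DEFAULT_OLLAMA_MODEL = 'llama3-groq-tool-use';
const DEFAULT_OLLAMA_BASE_URL = 'http://localhost:11434/api';

// Placeholder config shape; the real project presumably already has one
// that the Anthropic/OpenAI providers read from.
interface OllamaConfig {
  ollamaBaseUrl?: string; // set via "config set ollamaBaseUrl [value]"
  ollamaModel?: string;
}

export function createOllamaModel(config: OllamaConfig) {
  const ollama = createOllama({
    // Fall back to the local Ollama API when no base URL is configured.
    baseURL: config.ollamaBaseUrl ?? DEFAULT_OLLAMA_BASE_URL,
  });
  return ollama(config.ollamaModel ?? DEFAULT_OLLAMA_MODEL);
}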
Here are the docs from:
https://sdk.vercel.ai/providers/community-providers/ollama
Setup
The Ollama provider is available in the ollama-ai-provider module. You can install it with:
pnpm add ollama-ai-provider
Provider Instance
You can import the default provider instance ollama from ollama-ai-provider:
import { ollama } from 'ollama-ai-provider';
If you need a customized setup, you can import createOllama from ollama-ai-provider and create a provider instance with your settings:
import { createOllama } from 'ollama-ai-provider';
const ollama = createOllama({
// optional settings, e.g.
baseURL: 'https://api.ollama.com',
});
You can use the following optional settings to customize the Ollama provider instance:
baseURL (string): Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is http://localhost:11434/api.
headers (Record<string,string>): Custom headers to include in the requests.
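For example, a customized instance pointing at a non-local Ollama host could combine both settings (the host and token below are placeholders):

import { createOllama } from 'ollama-ai-provider';

const ollama = createOllama({
  baseURL: 'http://my-ollama-host:11434/api', // placeholder host
  headers: { Authorization: 'Bearer <token>' }, // placeholder header
});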
Language Models
You can create models that call the Ollama Chat Completion API using the provider instance. The first argument is the model id, e.g. phi3. Some models have multi-modal capabilities.
const model = ollama('phi3');
You can find more models on the Ollama Library homepage.
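As a quick sketch of how the test model for this issue could be exercised through the AI SDK's generateText (assuming the default local Ollama instance):

import { generateText } from 'ai';
import { ollama } from 'ollama-ai-provider';

async function main() {
  // llama3-groq-tool-use is the test model already pulled on the dev machine.
  const { text } = await generateText({
    model: ollama('llama3-groq-tool-use'),
    prompt: 'Say hello in one short sentence.',
  });
  console.log(text);
}

main().catch(console.error);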
Model Capabilities
This provider is capable of generating and streaming text and objects. Object generation may fail depending on the model and the schema used.
The following models have been tested with image inputs:
llava
llava-llama3
llava-phi3
moondream
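For reference, a minimal sketch exercising both capabilities against the test model, using the AI SDK's streamText and generateObject with a zod schema (object generation may fail for some models and schemas, as noted above):

import { streamText, generateObject } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { z } from 'zod';

async function main() {
  // Stream plain text from the local model.
  const { textStream } = await streamText({
    model: ollama('llama3-groq-tool-use'),
    prompt: 'List three uses for a local LLM.',
  });
  for await (const chunk of textStream) {
    process.stdout.write(chunk);
  }

  // Structured output; may fail depending on the model and schema.
  const { object } = await generateObject({
    model: ollama('llama3-groq-tool-use'),
    schema: z.object({ city: z.string(), country: z.string() }),
    prompt: 'Name a city and its country.',
  });
  console.log(object);
}

main().catch(console.error);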