
Add Ollama support via the Vercel AI SDK. #85

Closed
bhouston opened this issue Mar 4, 2025 · 0 comments · Fixed by #87
bhouston commented Mar 4, 2025

There is already Anthropic and OpenAI provider support, and both allow configuring which model to use. I would now like to add Ollama support. It should default to a local model, but the ollamaBaseUrl should be configurable as a config parameter via the `config set ollamaBaseUrl [value]` command line. No need to support image generation or image analysis to start. The test model to use on Ollama is llama3-groq-tool-use; it is already downloaded and available via Ollama on the local development machine.
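
A minimal sketch of how the wiring could look, assuming a hypothetical `getConfig` helper standing in for the CLI's config store; the real config plumbing in this repo may differ:

```ts
import { createOllama } from 'ollama-ai-provider';

// Hypothetical stand-in for the CLI config store ("config set ollamaBaseUrl [value]");
// the real implementation would read the persisted config.
function getConfig(key: string): string | undefined {
  return process.env[key]; // placeholder lookup
}

const ollama = createOllama({
  // Fall back to the local Ollama default endpoint when ollamaBaseUrl is unset.
  baseURL: getConfig('ollamaBaseUrl') ?? 'http://localhost:11434/api',
});

// llama3-groq-tool-use is the test model already pulled on the dev machine.
const model = ollama('llama3-groq-tool-use');
```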

Here are the docs from https://sdk.vercel.ai/providers/community-providers/ollama:

Setup

The Ollama provider is available in the `ollama-ai-provider` module. You can install it with:

```sh
pnpm add ollama-ai-provider
```

Provider Instance

You can import the default provider instance ollama from ollama-ai-provider:

```ts
import { ollama } from 'ollama-ai-provider';
```

If you need a customized setup, you can import `createOllama` from `ollama-ai-provider` and create a provider instance with your settings:

```ts
import { createOllama } from 'ollama-ai-provider';

const ollama = createOllama({
  // optional settings, e.g.
  baseURL: 'https://api.ollama.com',
});
```

You can use the following optional settings to customize the Ollama provider instance:

- `baseURL` *string*

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `http://localhost:11434/api`.

- `headers` *Record<string, string>*

  Custom headers to include in the requests.
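
For instance, a sketch of pointing the provider at a proxied, self-hosted Ollama instance with a custom auth header (the URL and header name here are illustrative, not from this issue):

```ts
import { createOllama } from 'ollama-ai-provider';

const ollama = createOllama({
  // Hypothetical reverse proxy in front of a self-hosted Ollama server.
  baseURL: 'https://ollama.internal.example.com/api',
  headers: {
    // Illustrative header; use whatever your proxy actually expects.
    'X-Proxy-Token': 'secret-token',
  },
});
```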

Language Models

You can create models that call the Ollama Chat Completion API using the provider instance. The first argument is the model id, e.g. phi3. Some models have multi-modal capabilities.

```ts
const model = ollama('phi3');
```

You can find more models on the Ollama Library homepage.
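
As a quick usage check, a model created this way plugs directly into the AI SDK's generateText; a minimal sketch (the prompt is arbitrary):

```ts
import { generateText } from 'ai';
import { ollama } from 'ollama-ai-provider';

const { text } = await generateText({
  model: ollama('llama3-groq-tool-use'),
  prompt: 'Reply with one short sentence.',
});

console.log(text);
```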

Model Capabilities

This provider is capable of generating and streaming text and objects. Object generation may fail depending on the model and the schema used.
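
For object generation, a sketch using the AI SDK's generateObject with a zod schema; whether this succeeds depends on the model, as noted above (the schema and prompt are made up for illustration):

```ts
import { generateObject } from 'ai';
import { z } from 'zod';
import { ollama } from 'ollama-ai-provider';

const { object } = await generateObject({
  model: ollama('llama3-groq-tool-use'),
  schema: z.object({
    name: z.string(),
    port: z.number(),
  }),
  prompt: 'Suggest a service name and port for a local dev server.',
});

console.log(object);
```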

The following models have been tested with image inputs:

- llava
- llava-llama3
- llava-phi3
- moondream
