Think of it like a control panel where you can:
- Store your API keys and settings for AI services
- Share these settings with other Obsidian plugins
- Avoid entering the same AI settings multiple times
The plugin itself doesn't perform any AI processing; it simply helps other plugins connect to AI services.

Supported providers:
- Ollama
- OpenAI
- OpenAI compatible API
- OpenRouter
- Google Gemini
- LM Studio
- Groq

Features:
- Fully encapsulated API for working with AI providers
- Develop AI plugins faster without dealing directly with provider-specific APIs
- Easily extend support for additional AI providers in your plugin
- Available in 4 languages: English, Chinese, German, and Russian (more languages coming soon)
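The "fully encapsulated API" idea can be sketched roughly as below. Note that every name here (`AIProvider`, `AIProvidersService`, `execute`, `onProgress`) is an illustrative assumption, not the actual SDK surface — the real integration guide is in the plugin's developer docs. The point is only that a consumer plugin talks to one provider-agnostic interface while AI Providers handles the provider-specific details:

```typescript
// Illustrative sketch only — interface names are assumptions, not the real SDK API.

interface AIProvider {
  id: string;
  name: string;  // e.g. "Ollama"
  model: string; // e.g. "gemma2"
}

interface AIProvidersService {
  providers: AIProvider[];
  execute(params: {
    provider: AIProvider;
    prompt: string;
    onProgress?: (chunk: string) => void; // streaming chunks, if supported
  }): Promise<string>;
}

// A consumer plugin depends only on the interface above, never on
// Ollama/OpenAI/Groq specifics:
async function summarize(ai: AIProvidersService, text: string): Promise<string> {
  const provider = ai.providers[0]; // in practice, the user-selected provider
  return ai.execute({ provider, prompt: `Summarize:\n${text}` });
}

// A mock service standing in for the shared plugin, to show the call shape:
const mockService: AIProvidersService = {
  providers: [{ id: "1", name: "Ollama", model: "gemma2" }],
  async execute({ prompt }) {
    return `[mock completion for: ${prompt.slice(0, 10)}...]`;
  },
};

summarize(mockService, "Obsidian is a note-taking app.").then(console.log);
```

Because the provider is just data passed to `execute`, swapping Ollama for OpenAI (or any other supported backend) requires no code changes in the consumer plugin.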

Installation:
- This plugin is available in the Obsidian community plugin store: https://obsidian.md/plugins?id=ai-providers
- You can also install it via BRAT: pfrankov/obsidian-ai-providers

Ollama:
- Install Ollama.
- Install Gemma 2 (`ollama pull gemma2`) or any preferred model from the library.
- Select `Ollama` in `Provider type`.
- Click the refresh button and select the model that suits your needs (e.g. `gemma2`).

Additional: if you have issues with streaming completions in Ollama, try setting the environment variable `OLLAMA_ORIGINS` to `*`:
- On macOS, run `launchctl setenv OLLAMA_ORIGINS "*"`.
- On Linux and Windows, check the Ollama docs.

OpenAI:
- Select `OpenAI` in `Provider type`.
- Set `Provider URL` to `https://api.openai.com/v1`.
- Retrieve your `API key` from the API keys page and paste it in.
- Click the refresh button and select the model that suits your needs (e.g. `gpt-4o`).
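Conceptually, the refresh button corresponds to the standard OpenAI-style `GET {Provider URL}/models` call (how the plugin does this internally is an assumption here — the endpoint itself is the documented OpenAI API shape, which the other OpenAI-compatible providers below mirror):

```typescript
// Sketch of a "refresh models" request against an OpenAI-style API.

interface ModelsResponse {
  data: { id: string }[];
}

// Build the request the refresh step would send (no network call here):
function modelsRequest(providerUrl: string, apiKey: string) {
  return {
    url: `${providerUrl.replace(/\/+$/, "")}/models`, // strip trailing slashes
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

// Extract the model ids the dropdown would show:
function parseModelIds(response: ModelsResponse): string[] {
  return response.data.map((m) => m.id);
}

const req = modelsRequest("https://api.openai.com/v1", "sk-...");
console.log(req.url); // https://api.openai.com/v1/models

const sample: ModelsResponse = { data: [{ id: "gpt-4o" }, { id: "gpt-4o-mini" }] };
console.log(parseModelIds(sample));
```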

There are several options for running a local OpenAI-compatible server:
- Open WebUI
- llama.cpp
- llama-cpp-python
- LocalAI
- Oobabooga Text generation web UI
- LM Studio
- ...maybe more
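All of the servers above expose (roughly) the OpenAI chat-completions wire format, which is why a single "OpenAI compatible API" provider type can cover them — only the base URL changes. A sketch of that shared request shape (the exact request AI Providers sends internally is not documented here, so treat this as the general format, not the plugin's implementation):

```typescript
// Minimal OpenAI-compatible chat completion request builder.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function chatCompletionRequest(baseUrl: string, model: string, messages: ChatMessage[]) {
  return {
    url: `${baseUrl.replace(/\/+$/, "")}/chat/completions`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: true }), // stream: true for token streaming
  };
}

// The same request shape works for llama.cpp, LocalAI, LM Studio, ... —
// only the base URL differs:
const req = chatCompletionRequest("http://localhost:1234/v1", "gemma2", [
  { role: "user", content: "Hello!" },
]);
console.log(req.url); // http://localhost:1234/v1/chat/completions
```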

OpenRouter:
- Select `OpenRouter` in `Provider type`.
- Set `Provider URL` to `https://openrouter.ai/api/v1`.
- Retrieve your `API key` from the API keys page and paste it in.
- Click the refresh button and select the model that suits your needs (e.g. `anthropic/claude-3.7-sonnet`).

Google Gemini:
- Select `Google Gemini` in `Provider type`.
- Set `Provider URL` to `https://generativelanguage.googleapis.com/v1beta/openai`.
- Retrieve your `API key` from the API keys page and paste it in.
- Click the refresh button and select the model that suits your needs (e.g. `gemini-1.5-flash`).

LM Studio:
- Select `LM Studio` in `Provider type`.
- Set `Provider URL` to `http://localhost:1234/v1`.
- Click the refresh button and select the model that suits your needs (e.g. `gemma2`).

Groq:
- Select `Groq` in `Provider type`.
- Set `Provider URL` to `https://api.groq.com/openai/v1`.
- Retrieve your `API key` from the API keys page and paste it in.
- Click the refresh button and select the model that suits your needs (e.g. `llama3-70b-8192`).

Docs for developers: How to integrate AI Providers in your plugin.

Roadmap:
- Docs for devs
- Ollama context optimizations
- Image processing support
- OpenRouter Provider support
- Gemini Provider support
- LM Studio Provider support
- Groq Provider support
- Anthropic Provider support
- Shared embeddings to avoid re-embedding the same documents multiple times
- Spanish, Italian, French, Dutch, Portuguese, Japanese, Korean translations
- Encapsulated basic RAG search with optional BM25 search

My other plugins:
- Local GPT that assists with local AI for maximum privacy and offline access.
- Colored Tags that colorizes tags in distinguishable colors.